When Simple Solutions Meet Complex Realities

We live in an age where the most pressing challenges emerge from the intersection of multiple complex systems—where technological disruption meets social change, where economic pressures interact with environmental constraints, and where local decisions can trigger global consequences. These multifaceted problems resist the reductionist approach of breaking them down into manageable parts, because their most important characteristics arise from the interactions between components, not from the components themselves. 

Consider the current state of artificial intelligence. It’s worth noting that most AI innovations have occurred in well-defined tasks like image recognition, speech processing, and translation—domains with clear parameters and measurable outcomes. But what about situations humanity has never encountered before, or tasks involving multiple complex systems interacting in unpredictable ways? Think about simulating national defense scenarios involving cyber warfare, predicting cascading environmental effects across ecosystems, or managing global pandemic responses where biological, social, and economic factors intertwine.

We need to think beyond disciplinary boundaries and experiment with new approaches to studying and embracing complexity as it actually exists. While modern societies are structured in ways that encourage us to simplify everything for the sake of manageability, this approach increasingly falls short when the very thing we’re trying to understand is the unpredictable emergence that comes from interactions between complex systems.


Learn from the first 3 examples and complete the missing piece. Easy for children, unsustainably expensive for AI. Why?


Current AI, essentially synonymous with large language models (LLMs), faces fundamental limitations in unpredictable scenarios. These systems can only make inferences from patterns they’ve already seen in their training data. When confronted with genuinely novel situations or complex system interactions that haven’t been explicitly modeled, they struggle to generate meaningful insights, instead defaulting to plausible-sounding but ultimately inadequate responses based on superficial pattern matching.


“We need AI that can help us imagine system designs that embrace complexity rather than reduce it, drawing inspiration from the adaptive processes that allow natural systems to thrive in uncertainty.”


As AI continues to advance, there’s a growing need for systems that can transcend these limitations and venture into truly uncharted territory—generating novel ideas, exploring unknown domains, and creating solutions that aren’t merely sophisticated recombinations of existing patterns. We need AI that can help us imagine system designs that embrace complexity rather than reduce it, drawing inspiration from the adaptive processes that allow natural systems to thrive in uncertainty.

Yet AI development is currently dominated by an engineering-driven strategy of improving performance by increasing computing resources and data volume. The “scaling strategy” of bigger models fed with bigger data has become commonplace, fueling a computational arms race that consumes enormous resources while potentially moving us further from AI’s original promise of creating genuinely intelligent, adaptive systems.

This is where the concept of “open-endedness” enters the conversation.


A Different Way Forward

Open-endedness isn’t actually a new concept. Researchers have been exploring it for over four decades, particularly in the field of artificial life (ALife), but it’s gaining renewed attention as we push against the boundaries of conventional AI approaches. At its core, open-endedness refers to an AI system’s ability to autonomously explore unknown problem spaces and continuously generate new ideas and solutions, even without being guided by clear goals or objectives.

To grasp what makes open-endedness special, it helps to contrast it with today’s LLMs. While these approaches can be complementary, they operate on fundamentally different principles:

For decades, traditional AI research, including large language modeling, has been guided by a singular principle: optimize for a given objective function. Consider a simple example: If we task an AI with “running as fast as possible,” speed becomes the objective function. The system will naturally gravitate toward optimizing familiar locomotion patterns, perhaps refining a standard bipedal gait to make it incrementally faster. What it won’t do is spontaneously invent an entirely new way of moving, such as switching from running to rolling or developing an unprecedented form of locomotion that humans never considered.

This limitation sparked an innovative breakthrough in open-endedness research: Novelty Search. Developed by Kenneth Stanley and his colleagues, Novelty Search deliberately abandons conventional goal-oriented approaches. Instead of pursuing a clear objective, it uses “novelty” itself as the primary evaluation metric. By rewarding solutions that differ significantly from what has been seen before, regardless of whether they immediately advance toward the ostensible goal, Novelty Search prevents convergence on predictable solutions and enables exploration of a vastly wider solution space.
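The contrast can be made concrete with a toy sketch. This is my own minimal illustration of the selection rule, not Stanley’s original implementation: instead of ranking candidates by progress toward a goal, rank them by how far their behavior lies from behaviors already seen.

```python
import math
import random

def novelty(behavior, archive, k=5):
    """Novelty score: mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(math.dist(behavior, b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(behavior_fn, mutate_fn, seed, generations=50, pop_size=20):
    """Selection pressure is novelty itself, not progress toward an objective."""
    population = [seed] * pop_size
    archive = []  # behaviors encountered so far
    for _ in range(generations):
        children = [mutate_fn(random.choice(population)) for _ in range(pop_size * 2)]
        children.sort(key=lambda g: novelty(behavior_fn(g), archive), reverse=True)
        population = children[:pop_size]                      # most novel survive
        archive.extend(behavior_fn(g) for g in children[:3])  # remember their behaviors
    return archive

# Toy domain: a genome is a 2D point and its "behavior" is the point itself.
random.seed(0)
mutate = lambda g: (g[0] + random.gauss(0, 0.5), g[1] + random.gauss(0, 0.5))
archive = novelty_search(lambda g: g, mutate, (0.0, 0.0))
# Novelty pressure pushes behaviors apart, so the archive spreads out.
spread = max(math.dist(a, b) for a in archive for b in archive)
print(f"{len(archive)} archived behaviors, max spread {spread:.1f}")
```

Real implementations measure novelty in a task-specific behavior space (say, a robot’s final position rather than its genome) and often pair novelty with local competition; the sketch keeps only the core idea that “different from what came before” is the thing being selected for.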


Out of the Box: Breaking Free from Objective Functions

Despite its potential, Novelty Search initially faced resistance from the AI community. The idea of prioritizing novelty over progress toward a stated objective runs counter to deeply ingrained problem-solving instincts. For Professor Mizuki Oka, Representative Director at ALife Institute, this paradox became a source of inspiration rather than frustration. After beginning her career in data mining and machine learning, she found herself increasingly disillusioned by the narrow focus on incremental improvements in algorithm accuracy. As she shared with me: “I really got frustrated by having to come up with an algorithm that’s maybe slightly 0.1% increase in accuracy given a specific target, which is usually a benchmark. The goal is set by somebody else, and your job as a student or as a junior researcher is to come up with better algorithms. I just could not find that exciting, or I could not imagine doing that for the next 30 years as a researcher.”

Mizuki was drawn to Artificial Life (ALife) because it offered a completely different approach to computing and intelligence compared to traditional machine learning. ALife presented a radical alternative: creating systems without predefined purposes and observing what behaviors or futures might emerge. “It’s more like a very exploratory kind of approach,” she notes. “You make something and you let them run, and you’re successful if the system produces something that excites you or that surprises you.”

Today, Novelty Search is gradually moving from the theoretical realm toward practical applications.

Interestingly, one of AI’s most celebrated achievements (AlphaGo’s stunning victory over Lee Sedol in 2016) already demonstrated aspects of open-endedness when it produced moves like the famous “Move 37” that surprised even its creators. While not explicitly designed as an open-ended system, AlphaGo’s ability to discover strategies no human had conceived before offered an early glimpse of what’s possible when AI ventures beyond human priors. More recently, emerging projects like SakanaAI’s AI Scientist and Google’s AI Co-Scientist have begun explicitly incorporating exploration-driven mechanisms inspired by principles from Novelty Search, emphasizing novelty and creativity in automated scientific discovery. The recent release of AlphaEvolve, a descendant of AlphaGo, further demonstrates this trend by combining reinforcement learning and evolutionary methods with the typical LLM software stack, showing promising results even in restricted domains with clear objective functions. These developments highlight how exploration-focused approaches can be more fruitful than traditional optimization methods in certain circumstances—particularly when the goal is to discover truly novel solutions in complex, unbounded domains.


A Move Away From Intelligence

Importantly, what makes the open-endedness approach so profound is its connection to nature’s own innovation engine: evolution. In the natural world, organisms don’t optimize toward predefined goals. There’s no universal objective function driving the development of species. Instead, living things freely adapt their traits, transform their behaviors, and discover new survival strategies in response to changing environments. This open-ended evolutionary process has produced an astonishing diversity of solutions to life’s challenges—solutions no engineer could have designed through direct optimization.

Evolution in nature has produced a succession of creatures and functions more diverse than we can imagine, demonstrating that there exists a vast creative space beyond what intelligence alone can achieve. This suggests that by accelerating and reproducing evolutionary processes with computational resources, we can promote the discovery of innovative ideas and technologies that have never been seen before.

AI researchers have developed several frameworks to harness this evolutionary principle. For example, evolutionary algorithms and genetic algorithms attempt to mimic natural selection’s ability to generate novel solutions without being constrained by narrow objectives. More specialized approaches like NEAT (NeuroEvolution of Augmenting Topologies) take this further by evolving the very structure of neural networks, not just their weights but their entire architecture. Similarly, CPPNs (Compositional Pattern Producing Networks) provide methods for indirectly encoding complex structures, allowing for the emergence of sophisticated patterns through relatively simple evolutionary processes.
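As a reference point for this family of methods, here is a bare-bones genetic algorithm over bit-string genomes: selection, crossover, and mutation. This is a simplified sketch of the general recipe, far short of NEAT or CPPNs, which additionally evolve network topology and indirect encodings; the `one_max` fitness function is just a stand-in objective for demonstration.

```python
import random

def one_max(genome):
    """Toy fitness: count of 1-bits (a stand-in for any real objective)."""
    return sum(genome)

def evolve(fitness_fn, genome_len=32, pop_size=30, generations=60, mut_rate=0.02):
    """A minimal genetic algorithm: selection, crossover, mutation."""
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness_fn, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut_rate) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness_fn)

best = evolve(one_max)
print(one_max(best))
```

The point of the sketch is the loop structure itself: variation plus selection, with no gradient and no model of the problem. NEAT and CPPNs keep that loop but make the genome far richer, which is what lets structure, not just parameter values, evolve.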


“Evolution in nature has produced a succession of creatures and functions more diverse than we can imagine, demonstrating that there exists a vast creative space beyond what intelligence alone can achieve.”


These evolution-inspired techniques share a common thread: they all prioritize exploration over exploitation, diversity over optimization, and novelty over incremental improvement. In doing so, they offer promising pathways toward achieving true open-endedness in artificial intelligence: systems that can continually generate unexpected solutions and venture into genuinely uncharted territory. This represents a profound shift away from intelligence as the only source of innovation.


A Synergetic Relationship

Open-endedness and artificial intelligence exist in separate dimensions that nonetheless intersect and influence each other in powerful ways. While open-endedness functions more as a philosophical direction or conceptual goal (as glimpsed in AlphaGo’s surprising moves), AI provides the practical tools and frameworks that help achieve this vision.

This relationship works bidirectionally. Open-endedness pushes AI beyond its current limitations, while advances in LLMs have dramatically accelerated our ability to implement genuinely open-ended systems. The most significant breakthrough is LLMs’ ability to work with “ambiguous and subjective evaluation criteria,” something earlier AI couldn’t handle, as they required clear, quantifiable objectives to optimize toward.

The OMNI-EPIC study demonstrates this perfectly. By setting “interestingness” as the objective function, researchers created an AI system that could autonomously generate solutions humans found genuinely intriguing. This represents a fundamental shift: concepts like “interestingness” were previously too subjective and abstract to function as effective objectives, because traditional algorithms simply couldn’t quantify them.

Through exposure to vast amounts of human-generated content, LLMs have implicitly learned subjective value criteria that align with human perceptions of interest, surprise, and aesthetic appeal. They can interpret and apply these ambiguous evaluation metrics in ways earlier systems couldn’t approach. This enables AI to search not just for abstract novelty, but for directions humans find meaningful and valuable, generating innovative ideas we might never have conceived ourselves, yet remain interpretable. This is the essence of true open-endedness.
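The pattern can be sketched in a few lines. The `judge` callable below is a stand-in for a real LLM call, whose interface I am not assuming; a system like OMNI-EPIC would send the prompt to an actual model rather than to the toy word-novelty heuristic used here for demonstration.

```python
def interestingness(candidate, judge):
    """Score a candidate with a subjective, language-based criterion.
    `judge` is a stand-in for a real LLM call (hypothetical interface)."""
    prompt = f"Rate from 0 upward how novel and interesting this idea is:\n{candidate}"
    return judge(prompt)

def select_most_interesting(candidates, judge, top_n=2):
    """Use the subjective score, not a hand-coded metric, as the selector."""
    return sorted(candidates, key=lambda c: interestingness(c, judge), reverse=True)[:top_n]

# Stand-in judge: rewards ideas containing words not seen before. A real
# system would obtain this judgment from an actual LLM.
seen_words = {"walk", "run", "jump", "fast"}
def fake_judge(prompt):
    idea = prompt.rsplit("\n", 1)[1]
    return sum(1 for w in idea.lower().split() if w not in seen_words)

ideas = ["run fast", "walk then jump", "roll downhill then glide"]
best = select_most_interesting(ideas, fake_judge)
print(best)
```

The design point is that the selection loop never defines “interesting” numerically itself; it delegates that judgment to a model that has absorbed human notions of interest from text.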

LLMs have also addressed another major challenge in open-endedness: solution “mutation.” Many open-endedness algorithms draw inspiration from evolutionary processes, relying on mutations to existing solutions to generate new possibilities. Traditionally implemented through random modifications, this approach was computationally inefficient and often produced meaningless variations.

What LLMs enable is “smart mutation.” Rather than blind, random changes, these models introduce modifications that are likely to be both novel and meaningful. This drastically increases the probability of generating innovative solutions while reducing wasted computation. Where previous systems might require thousands of random modifications to find one promising variation, LLMs can make targeted, contextually appropriate changes with a much higher success rate.
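The difference between blind and smart mutation can be sketched as follows. The `propose` callable stands in for a real LLM API (no particular interface is assumed), and the `fake_llm` stand-in simply applies one sensible rewrite rule so the example runs on its own; a real system would return a model-generated variation instead.

```python
import random

def random_mutation(solution):
    """Blind mutation: replace one character at random, usually meaningless."""
    i = random.randrange(len(solution))
    return solution[:i] + random.choice("abcdefgh +-*/()") + solution[i + 1:]

def smart_mutation(solution, propose):
    """'Smart' mutation: ask a language model for a targeted, meaningful edit.
    `propose` is a stand-in for a real LLM call (hypothetical interface)."""
    prompt = f"Suggest one novel but meaningful variation of: {solution}"
    return propose(prompt)

# Stand-in 'model': applies a single sensible rewrite rule. A real system
# would send the prompt to an actual LLM instead.
def fake_llm(prompt):
    solution = prompt.rsplit(": ", 1)[1]
    return solution.replace("running", "rolling")

random.seed(2)
base = "robot moves by running on two legs"
print(random_mutation(base))          # a garbled character-level change
print(smart_mutation(base, fake_llm))
```

Random mutation explores the space of character strings; smart mutation explores the space of ideas. That is the efficiency gap the paragraph above describes: fewer wasted variations, because each proposed change is already semantically coherent.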

This marriage of open-endedness philosophy with LLM capabilities points toward AI systems that can continually surprise us—not with random novelty, but with genuinely original solutions that expand our understanding of what’s possible.


Applications and Future Directions

The potential applications for open-endedness are particularly compelling in domains where existing AI systems like LLMs clearly show their limitations. Social systems and security scenarios involving multiple complex systems present fascinating use cases. Imagine simulating national defense scenarios to help governments navigate unprecedented geopolitical challenges, or modeling how social movements might evolve across interconnected digital and physical spaces.

Beyond these complex policy applications, open-endedness could transform more familiar domains. In gaming and entertainment, we could see systems that automatically generate new weapons, characters, and entire worlds, creating truly “open-ended” experiences where players encounter genuinely surprising scenarios rather than variations on familiar themes. Educational applications hold equal promise—while current AI cannot match the boundless creativity of children, open-ended systems might enable educational experiences that adapt and evolve in real-time, generating novel challenges and explanations tailored to individual learning journeys.

Perhaps most exciting is the potential for medical and scientific research that goes beyond laboratory automation. Open-ended AI systems might generate entirely new hypotheses, suggest unexpected research directions, or identify patterns and possibilities that human researchers haven’t considered, contributing to genuinely uncharted scientific discoveries.


“Open-endedness can enrich public discussions about AI’s role in society by expanding our understanding of intelligence beyond narrow, human-centric definitions.”


Of course, novelty alone doesn’t guarantee benefit, and open-endedness raises important ethical considerations. The key challenge lies in ensuring that AI creativity remains comprehensible and valuable to humans. We need systems that can be autonomous and creative while remaining fundamentally aligned with human values and understanding. Achieving this balance will require careful consideration of how we design the feedback mechanisms and evaluation criteria that preserve human agency while enabling meaningful exploration of novel solutions.

Looking ahead, open-endedness can enrich public discussions about AI’s role in society by expanding our understanding of intelligence beyond narrow, human-centric definitions. Rather than framing AI development as an either-or proposition—either AI threatens to replace humans or it serves as our obedient tool—open-endedness suggests a more nuanced relationship where different forms of intelligence coexist and complement each other. This perspective can help move public discourse away from simplistic narratives toward a more sophisticated understanding of technology and its potential.


Toward a Pluralistic AI Future

The future of AI likely lies not in any single paradigm, but in a rich ecosystem where different approaches complement and enhance each other. Open-ended systems would contribute their unique capacity for exploration and novelty generation, operating alongside more traditional optimization-focused approaches, reinforcement learning systems, and symbolic reasoning frameworks. This pluralistic approach acknowledges that intelligence itself is multifaceted. Sometimes we need precise calculation, sometimes creative leaps, and often a dynamic interplay between exploration and exploitation. By embracing open-endedness as one essential component of this broader AI ecosystem, we can build technological systems that are more resilient, more creative, and better equipped to handle the unpredictable challenges of an increasingly complex world.

We are on the verge of an evolutionary transition in intelligence, and we need as many people as possible to participate, with as diverse objectives as possible. Without this diversity of participation and purpose, we risk heading into a future where one form of intelligence becomes the oppressor of the other. Open-endedness offers a path toward a more collaborative, pluralistic relationship between human and artificial intelligence—one that celebrates rather than eliminates the differences that make different forms of intelligence valuable.




Joseph Park is a researcher at DAL (joseph@dalab.xyz)


Illustration: Asuka Zoe Hayashi
Edits: Janine Liberty