Rethinking AI: Beyond AGI – The Cognitive Cone

Expanding our understanding of AI progress through Michael Levin's cognitive cone framework.

Dec 14, 2024 – This text is mostly generated under tight supervision.

Artificial General Intelligence (AGI) is a vague concept, often debated over what truly qualifies as "general" intelligence. Some might argue that systems like ChatGPT have achieved AGI because they handle many tasks. However, these systems, like all intelligent ones, have limitations and aren't universally "clever." In other words, AGI is a poor metric for measuring progress and therefore cannot effectively guide development.

In his 2019 paper, "The Computational Boundary of a 'Self': Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition," Levin articulates this idea:

"Any Self is demarcated by a computational surface – the spatio-temporal boundary of events that it can measure, model, and try to affect. This surface sets a functional boundary—a cognitive 'light cone' which defines the scale and limits of its cognition."
[Figure: Michael Levin's cognitive "light cone" diagram]

Michael Levin’s concept of the cognitive cone, as visualized in the diagram, provides a powerful framework for understanding and guiding AI progress. It reframes intelligence not as a binary achievement but as a continuum defined by the breadth and depth of a system's perceptive and actionable range. Here’s how this perspective can shape how we think about AI development.

Applications to AI development

Expanding and scaling cognitive boundaries

AI progress should be measured by how far its cognitive cone extends—its ability to perceive, process, and act across broader spatial and temporal scales. Intelligence grows as systems integrate predictive modeling, memory, and goal adaptability, enabling them to address challenges like global systems coordination or long-term planning. While early systems had narrow, task-specific cones, modern AI demonstrates broader adaptability but still lacks strategic foresight.

Compound and distributed intelligence

Levin’s concept of compound intelligence suggests that interconnected AI agents can collaborate to form emergent, larger-scale cognition. For instance, networks of autonomous systems could collectively address challenges beyond the scope of individual agents. This may seem obvious, but I'll challenge this view in another post.

Dynamic and flexible cones

Advanced AI systems should be capable of adjusting the scope of their cognitive cone dynamically in response to changing goals, environments, or contexts. This adaptability allows the system to expand its focus to address long-term, large-scale challenges or narrow it for immediate, localized tasks. I don't think current AI systems have shown this trait yet.
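One way to picture this dynamic adjustment is a planner that widens or narrows its planning horizon depending on the task. This is a minimal, hypothetical sketch; the task names and horizon values are invented for illustration, not drawn from any existing system.

```python
def plan(task: str, horizon_steps: int) -> list[str]:
    """Adjust the temporal scope of the cognitive cone per task:
    narrow it for immediate, local work; widen it for strategic goals."""
    if task == "respond_to_user":
        horizon_steps = min(horizon_steps, 1)    # narrow cone: act now
    elif task == "quarterly_strategy":
        horizon_steps = max(horizon_steps, 90)   # wide cone: look far ahead
    return [f"{task}:step_{i}" for i in range(horizon_steps)]

# The same system, two very different cone widths
print(len(plan("respond_to_user", 50)))      # collapses to 1 step
print(len(plan("quarterly_strategy", 5)))    # expands to 90 steps
```

The point is not the branching logic itself but that the scope is a runtime parameter of the system rather than a fixed design constant.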

Beyond human intelligence

AI need not mimic human cognition to be advanced. Machines have surpassed specific human limitations ever since the calculator, and AI could surpass more of them by developing cognitive cones capable of handling planetary-scale data or operating in domains inaccessible to humans.

Applications to AI products

Intelligence doesn’t exist in a vacuum—it depends on what a system can perceive, understand, and do. Evaluating how "intelligent" a system is without considering its ability to access knowledge or take meaningful actions misses the point.

If you’re building something based on AI, think about the cognitive cone of your system. What can it perceive? How aware of the past is it? How much of the future can it predict? How far does its influence extend?

Focus on expanding it—whether by increasing its access to data, enabling it to operate in new domains, or empowering it to act meaningfully on its insights. A well-designed cognitive cone isn't just about intelligence; it's about creating systems that deliver real value. While this may sound like a cliché, it becomes more meaningful when we consider the simplest definition of intelligence: the ability to achieve goals.
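To make these questions concrete, the cone of a product can be sketched as a handful of explicit ranges. Everything below is a hypothetical illustration: the field names, units, and the scalar "volume" proxy are invented, not part of Levin's framework.

```python
from dataclasses import dataclass

@dataclass
class CognitiveCone:
    """Illustrative model of a system's cognitive cone: how far it
    perceives, remembers, predicts, and acts. Units are arbitrary."""
    perception_scope: float    # spatial reach of sensing (e.g. km)
    memory_horizon: float      # how far into the past it retains state (e.g. days)
    prediction_horizon: float  # how far ahead it can usefully forecast (e.g. days)
    action_reach: float        # spatial reach of its effects (e.g. km)

    def volume(self) -> float:
        """A crude scalar proxy for the cone's extent."""
        return (self.perception_scope * self.memory_horizon
                * self.prediction_horizon * self.action_reach)

# A narrow, task-specific cone vs. a broad-sensing but inert one
thermostat = CognitiveCone(0.01, 0.1, 0.1, 0.01)
forecaster = CognitiveCone(1000.0, 365.0, 10.0, 0.0)  # perceives widely, acts on nothing

# Expanding the cone: let the forecaster act meaningfully on its insights
forecaster.action_reach = 10.0
assert forecaster.volume() > thermostat.volume()
```

The design choice worth noting: a system that perceives everything but can act on nothing has, by this measure, a cone of zero, which is exactly the "intelligence in a vacuum" problem described above.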
