Towards Artificial Life

A Compositional, Biologically-Inspired Framework for Grounded Intelligence.

Today's AI systems, while powerful, are fundamentally limited. They are "word models," not "world models," lacking true understanding and grounding in physical reality. My work is driven by a more ambitious goal: to understand the principles of life and consciousness, and to create truly autonomous systems that learn, adapt, and interact with the world.

This presentation outlines my vision to contribute to the collective human effort of building systems that are not just intelligent, but in a meaningful sense, alive.

Core Philosophy

Guiding Principles for a New AI

My approach is built on three core principles that form the foundation for a new generation of intelligent systems.


Grounding through Interaction

Intelligence isn't programmed; it emerges. A system can only build a meaningful model of the world by actively interacting with it, aligning with the idea that any persistent system must model its environment.


Compositionality at Scale

Complex systems are built from simpler parts. The key to scalable intelligence is composing simple, robust primitives into increasingly complex and abstract representations, mirroring biology and physics.


Prediction as a Driving Force

Learning is a process of prediction. An agent constantly predicts sensory input, and prediction errors are the primary signal driving the refinement of its internal world model, aligning with the Free Energy Principle.

Methodology

A Two-Pronged Approach to Building a Brain

A key direction in AI research is the move towards "object-centered physical models," a view I share. The question is how to derive them. My research investigates a complementary, bottom-up path to this goal.

Top-Down Goal

The "What": An agent with a high-level architecture for intelligence.

  • Modality-Agnostic Core
  • Active Feedback & Continuous Learning
  • Hierarchy of "Supervisors" for self-awareness

Bottom-Up Primitive

The "How": An atom of intelligence inspired by biology.

  • Formal Neuron Model
  • Rich Dynamical System
  • Emergent Learning & Representation

Converging on: Grounded Intelligence & World Models

A Point of Synergy

These are not conflicting views, but complementary levels of abstraction. My hypothesis is that a network of powerful, bottom-up units will, through learning, discover and form the very top-down, object-centered representations we seek. It's an investigation into how a world model might emerge.

The Atom of Intelligence

The Formal Neuron Model

My model is a departure from traditional ANNs, designed to support the core principles with a richer set of computational capabilities.

Graph-Based Architecture

The neuron itself is a micro-graph, allowing for complex internal signal integration and meaningful propagation delays.
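As a rough illustration of the idea, the sketch below models a neuron as a tiny DAG of internal compute nodes, where every edge imposes a one-tick propagation delay. The node names, weights, and update rule are illustrative assumptions, not the formal model itself.

```python
# Sketch of a "neuron as a micro-graph": internal nodes connected in a small
# DAG, so a signal integrates through several stages before reaching the
# output, and each edge adds a one-step propagation delay.
class MicroGraphNeuron:
    def __init__(self, edges):
        # edges: {target_node: [(source_node, weight), ...]}
        self.edges = edges
        nodes = set(edges) | {s for tgts in edges.values() for s, _ in tgts}
        self.values = {n: 0.0 for n in nodes}

    def tick(self, external_input):
        """Advance one time step; 'in' receives input, 'out' is the output."""
        prev = dict(self.values)   # node values from the previous step
        prev["in"] = external_input
        for node, sources in self.edges.items():
            # each edge reads its source's *previous* value: a 1-tick delay
            self.values[node] = sum(w * prev[s] for s, w in sources)
        return self.values["out"]

# Two internal stages (in -> hidden -> out): an input pulse only appears at
# the output after the internal propagation delay.
n = MicroGraphNeuron({"hidden": [("in", 1.0)], "out": [("hidden", 1.0)]})
outputs = [n.tick(1.0), n.tick(0.0), n.tick(0.0)]
```

The delay is not an artifact but the feature: internal timing gives the unit a richer temporal repertoire than a single weighted sum.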

Vector-Based Communication

Synapses transmit rich, multi-dimensional vectors, not just single weights, enabling more expressive communication.
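The contrast with scalar weights can be made concrete: instead of multiplying a scalar by one weight, each synapse applies a small matrix to a message vector, so a single connection carries several channels at once. The shapes and summation rule below are illustrative assumptions.

```python
# Sketch of vector-valued synaptic communication: a synapse is a small
# matrix mapping a multi-dimensional message into the target's input space,
# and the target neuron sums the transformed vectors from all synapses.
import numpy as np

def synapse_transmit(message, weight_matrix):
    """One synapse maps a k-dim message into the target's input space."""
    return weight_matrix @ message

def integrate(messages_and_weights):
    """The postsynaptic neuron sums the transformed vectors it receives."""
    return sum(synapse_transmit(m, W) for m, W in messages_and_weights)

# Two presynaptic neurons send 2-d messages through 3x2 synaptic matrices.
m1, W1 = np.array([1.0, 0.0]), np.eye(3, 2)
m2, W2 = np.array([0.0, 2.0]), np.eye(3, 2)
total = integrate([(m1, W1), (m2, W2)])   # a 3-d postsynaptic input vector
```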

Inherent Metaplasticity

Features context-dependent metaplasticity, where neuromodulatory signals can change the learning rules themselves in real-time.
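The distinction is that the neuromodulator does not change a weight directly; it changes the learning rule that future updates will use. A minimal sketch, assuming a plain Hebbian rule and a multiplicative gating function (both illustrative, not the model's actual rules):

```python
# Sketch of context-dependent metaplasticity: the neuromodulatory signal
# rescales the plasticity rate itself, so the same pre/post activity causes
# a different weight change depending on context.
def hebbian_update(w, pre, post, plasticity):
    """Plain Hebbian step whose step size is itself a modulated state."""
    return w + plasticity * pre * post

def modulate(plasticity, neuromodulator, gain=0.5):
    """The neuromodulator level changes the learning rule, not the weight."""
    return plasticity * (1.0 + gain * neuromodulator)

w, plasticity = 0.0, 0.1
w = hebbian_update(w, pre=1.0, post=1.0, plasticity=plasticity)  # w -> 0.1
plasticity = modulate(plasticity, neuromodulator=2.0)            # rate doubles
w = hebbian_update(w, pre=1.0, post=1.0, plasticity=plasticity)  # same activity,
                                                                 # bigger change
```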

Predictability at its Core

Learning rules are based on temporal prediction, constantly striving to match incoming signals with the neuron's own state.
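At the single-unit level, this can be sketched as a neuron that predicts its next input from its own internal state (here a leaky trace of past inputs) and adapts to reduce the temporal prediction error. The leak rate and delta-rule update are illustrative assumptions.

```python
# Sketch of a temporal-prediction learning rule: the neuron predicts its
# next input sample from its internal state, and the mismatch between
# prediction and reality drives learning.
class PredictiveNeuron:
    def __init__(self, lr=0.1, leak=0.5):
        self.w = 0.0      # maps the neuron's state to a prediction
        self.state = 0.0  # leaky trace of recent inputs
        self.lr, self.leak = lr, leak

    def step(self, x):
        prediction = self.w * self.state
        error = x - prediction                    # temporal prediction error
        self.w += self.lr * error * self.state    # delta rule on the predictor
        self.state = self.leak * self.state + x   # update the internal trace
        return abs(error)

# On a constant input stream, the neuron's predictions improve over time.
n = PredictiveNeuron()
errs = [n.step(1.0) for _ in range(300)]
```

This is the same error-driven principle as the agent-level view, pushed down into the atom itself.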

The Path Forward

A Phased Research Roadmap

A clear, methodical plan from foundational validation to long-term hardware co-design.

Current Status: A comprehensive formal model has been developed, with a proof-of-concept Python simulation validating the core dynamics and stability. The complete implementation is available as an open-source framework for neural experimentation.