Welcome to AERA
The Autocatalytic Endogenous Reflective Architecture (AERA) is a system aspiring to general machine intelligence (GMI), under development at the Center for Analysis & Design of Intelligent Agents (CADIA) at Reykjavik University and the Icelandic Institute for Intelligent Machines (IIIM), both in Iceland.
What is AERA?
AERA is a cognitive architecture - and a blueprint - for constructing agents with high levels of operational autonomy, starting from only a small amount of designer-specified code – a seed. Using value-driven dynamic priority scheduling to control the parallel execution of a vast number of lines of reasoning, the system accumulates increasingly useful models of its experience, resulting in recursive self-improvement that can be autonomously sustained after the machine leaves the lab, within the boundaries imposed by its designers.

How is AERA different from prior work?
AERA demonstrates domain-independent, self-supervised, cumulative learning of complex tasks. Unlike contemporary AI systems, AERA-based agents excel at handling novelty - situations, information, data, and tasks that their programmers could not anticipate. It is the only implemented system in existence for achieving bounded recursive self-improvement.

How does AERA learn?
AERA-based agents learn cumulatively from experience by interacting with the world and generating compositional, causal-relational micro-models of that experience. Using non-axiomatic abduction and deduction, an agent constantly predicts how to achieve its active goals and what the future may hold, generating a flexible, opportunistically interruptible plan for action.
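To make the learning description above more concrete, here is a minimal sketch, in Python rather than Replicode, of how a causal-relational micro-model can be used in two directions: deduction (forward, predicting what an action will make true) and abduction (backward, proposing a sub-goal that would make a desired outcome reachable). The model format, the facts, and the two-step example are invented for illustration and do not reflect AERA's actual representations.

```python
# Illustrative only: a toy rendering of "causal-relational micro-models" used
# forward (deduction: matched state -> predicted outcome) and backward
# (abduction: desired outcome -> candidate sub-goal). Not AERA or Replicode code.
from dataclasses import dataclass


@dataclass
class MicroModel:
    """If `precondition` holds, doing `action` is predicted to yield `effect`."""
    precondition: frozenset   # facts that must hold for the model to apply
    action: str               # command the agent can issue
    effect: frozenset         # facts predicted to become true afterwards

    def deduce(self, state: set):
        """Forward chaining: predict the resulting state if the model matches."""
        if self.precondition <= state:
            return set(state) | set(self.effect)
        return None

    def abduce(self, goal: set):
        """Backward chaining: if the effect would satisfy part of the goal,
        propose the precondition (plus whatever remains) as a sub-goal."""
        if self.effect & goal:
            return (goal - self.effect) | set(self.precondition)
        return None


# A tiny model library, written by hand here; AERA-style agents would acquire
# such models from experience and revise them when their predictions fail.
models = [
    MicroModel(frozenset({"hand_empty", "cube_visible"}), "grasp(cube)",
               frozenset({"holding_cube"})),
    MicroModel(frozenset({"holding_cube"}), "move(cube, box)",
               frozenset({"cube_in_box"})),
]

state, goal = {"hand_empty", "cube_visible"}, {"cube_in_box"}

# Abduction: regress the goal into sub-goals, yielding a rough plan outline.
for m in reversed(models):
    subgoal = m.abduce(goal)
    if subgoal is not None:
        print(f"to reach {goal}, do {m.action} after achieving {subgoal}")
        goal = subgoal

# Deduction: predict what executing each matching model would make true.
for m in models:
    predicted = m.deduce(state)
    if predicted is not None:
        print(f"{m.action} is predicted to add {predicted - state}")
        state = predicted
```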
Introduction
The Autocatalytic Endogenous Reflective Architecture addresses five principal features of autonomous control systems that are left both unaddressed and unaddressable by contemporary AI methodologies, techniques, and approaches:
1. The ability to operate effectively in environments that are only partially known at design time and that therefore contain a significant amount of novel information;
2. A level of generality that allows the system to re-assess and re-define the fulfillment of its mission, in light of unexpected constraints or other unforeseen changes in the environment;
3. The ability to operate effectively in environments of significant complexity;
4. The ability to learn cumulatively - continuously and life-long; and
5. The ability to degrade gracefully – to continue striving to achieve its main goals when resources become scarce, and in light of other expected or unexpected constraining factors that impede its progress.
AERA enables the creation of agents that become increasingly better at behaving in underspecified circumstances, in a goal-directed way, on the job, by modeling themselves and their environment as their experience accumulates. Based on the principles of autocatalysis, endogeny, and reflectivity, AERA is an architectural blueprint for constructing systems with high levels of operational autonomy, starting from only a small amount of designer-specified code – a seed. Using value-driven dynamic priority scheduling to control the parallel execution of a vast number of lines of reasoning, the system accumulates increasingly useful models of its experience, resulting in recursive self-improvement that can be autonomously sustained after the machine leaves the lab, within the boundaries imposed by its designers.
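The phrase "value-driven dynamic priority scheduling" can likewise be illustrated with a small, hypothetical sketch, again in Python and not taken from AERA: pending lines of reasoning are represented as jobs carrying a value, the most valuable jobs run first within a fixed budget, and running jobs may spawn further jobs whose values are weighed against everything else that is pending.

```python
# Illustrative only: a toy "value-driven dynamic priority" scheduler.
# Not AERA's scheduler; the job representation and value function are assumptions.
import heapq
import itertools


class ValueDrivenScheduler:
    """Run the most valuable pending jobs first, within a fixed budget."""

    def __init__(self):
        self._heap = []                       # entries: (-value, tie_breaker, job)
        self._counter = itertools.count()     # tie-breaker so jobs never get compared

    def submit(self, job, value: float):
        # heapq is a min-heap, so the value is negated to pop the best job first.
        heapq.heappush(self._heap, (-value, next(self._counter), job))

    def run(self, budget: int):
        """Execute up to `budget` jobs; a running job may submit follow-up work."""
        for _ in range(budget):
            if not self._heap:
                break
            _, _, job = heapq.heappop(self._heap)
            job(self)


# Example: two lines of reasoning competing for a small time budget.
def monitor_prediction(sched):
    print("checking a prediction against new input")

def pursue_goal(sched):
    print("abducing a sub-goal for a high-priority goal")
    sched.submit(monitor_prediction, value=0.2)   # spawn follow-up work

sched = ValueDrivenScheduler()
sched.submit(monitor_prediction, value=0.3)
sched.submit(pursue_goal, value=0.9)              # highest value: runs first
sched.run(budget=3)
```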
We have adopted a stringent definition of autonomy: an autonomous system is one that is operationally and semantically closed. No system can be built (or build itself) completely from scratch, so we assume that a system capable of general autonomous learning is “born” with some initial innate knowledge that allows it to bootstrap its interaction with the world – even if in a minimalistic way at first – and thereby acquire more knowledge and improve its performance. Note that we make no assumptions and impose no requirements that pertain specifically to the biological realm: AERA is thus not “biologically inspired” in any important sense of that term, and it is not our aim to mimic the particulars of human minds or biological systems.
Replicode
The programming language of AERA is Replicode. Replicode is a language specifically designed to encode short parallel programs and executable models, and is centered on the notions of extensive pattern-matching and dynamic code production. The language is domain-independent and has been designed for building systems that are goal-driven and goal-bounded – a kind of production system that can modify its own code. The Replicode interpreter itself is written in C++.
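For readers unfamiliar with production systems, the following sketch conveys the general flavor of pattern-matching rules that can inject new rules at run time. It is written in Python and is not Replicode; the rule format and the facts are invented purely for illustration (for actual Replicode syntax, see the Replicode documentation).

```python
# Illustrative only: a minimal pattern-matching production system in which
# firing a rule may inject new rules at run time ("dynamic code production").
# This is NOT Replicode; the rule format and the facts are invented.

facts = {("ball", "position", 3)}
rules = []

def add_rule(pattern, action):
    """A rule pairs a pattern (a predicate over one fact) with an action to run on a match."""
    rules.append((pattern, action))

def step():
    """One execution cycle: match every rule against every fact, run actions on matches."""
    for pattern, action in list(rules):        # copy, since actions may add rules
        for fact in list(facts):
            if pattern(fact):
                action(fact)

# A rule whose action produces another rule at run time.
def on_ball_seen(fact):
    print("matched:", fact)
    add_rule(lambda f: f[1] == "position" and f[2] > 5,
             lambda f: print("ball has moved far:", f))

add_rule(lambda f: f[0] == "ball", on_ball_seen)

step()                                    # fires on_ball_seen, which injects a new rule
facts.add(("ball", "position", 7))
step()                                    # the injected rule now fires as well
```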
Methodology
The only way to address the challenge of general machine intelligence (GMI) is to replace the prevailing top-down, hand-coded architectural design approach with methods that allow a system to manage its own cognitive growth. This calls for a fundamental shift to self-organizing principles and self-generated code – what we call a constructivist AI methodology (CAIM), in reference to the self-constructive principles on which it must be based. Methodologies for constructivist AI are very different from today’s software development methods; instead of relying on the direct design of mental functions and their implementation as cognitive modules in a cognitive architecture, they must address the principles – the “seed” – from which a cognitive architecture can automatically grow.
AERA is based on novel methodological principles for addressing the shortcomings mentioned in the Introduction above: a new constructivist AI methodology (CAIM), defined by Dr. Thórisson and described in several papers (for an overview, see his 2012 paper). In sharp contrast to most other approaches, instead of ignoring constraints of time and knowledge, or treating them as secondary concerns, we make them of central importance. CAIM proposes an autonomic approach which, unlike traditional software development methodologies and similar allonomic approaches to AI development, rests on a unified theory of meaning generation, understanding, grounding, and causal reasoning.
History
AERA was originally conceived of and designed by Eric Nivel and Dr. Kristinn R. Thórisson, in collaboration with researchers from several institutions, including IDSIA (Switzerland), Palermo University (Italy), the Institute for Cognitive Sciences (Italy), Communicative Machines (UK), and the Polytechnic University of Madrid (Spain).
AERA has been funded by the European Union under the HUMANOBS project (https://cordis.europa.eu/project/id/231453) (2009-2012), by Reykjavik University, by the Icelandic Institute for Intelligent Machines (2012-), and by Cisco Systems (2019-).