Frequently Asked Questions

Our methodology is based on a new constructivist A.I. approach, defined by Dr. Thórisson in his 2009 keynote paper on the subject. In sharp contrast to most other approaches, instead of ignoring constraints of time and knowledge, or treating them as secondary concerns, we make them of central importance. Our working definition of intelligence comes from Pei Wang: intelligence is the ability “to adapt with insufficient knowledge and limited resources”. We aim to build intelligent machines (a) that operate in open-ended environments and in real-time, (b) whose cognitive architecture is completely domain-independent, and (c) that exhibit self-programming abilities. AERA-based systems learn by observing intentional agents in their environment, and develop experience-based semantics automatically. The only form of knowledge given to them by their programmers – their innate knowledge and drives, so to speak – is provided in the form of a tiny amount of bootstrap code. This idea is of course not new. What is new is the integration of such an approach into a coherent, unified, reflective real-time architecture that can be implemented and run.

The main challenge was to design the system in two opposite directions simultaneously: top-down, from the specification of the desired cognitive functions to the implementation, and bottom-up, driving a synergetic implementation towards meeting a set of stringent requirements that we believe all systems must ultimately meet to be capable of general intelligence. Present mainstream software engineering methodologies do not support our architectural approach. Our approach relies on vast numbers of parallel fine-grained processes that must be coordinated precisely and efficiently: this is very different from the classic way of engineering large systems, and it raised issues for which we could find no support in existing design/coding toolboxes or methods. Add on top of this that our systems operate with temporal precision on the order of a few milliseconds, and you get an idea of the difficulty of this engineering challenge – and of what it is like to debug a large-scale system of this kind. In the process we had to solve several issues related to these facts, but many are only partially solved. Since existing logics are axiomatic, ignore real-time altogether, or both, we invented new principles for non-axiomatic real-time reasoning.
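The text does not spell out these new principles, but the general flavour of non-axiomatic reasoning can be illustrated with the frequency/confidence truth values used in Pei Wang's NAL, cited above: a judgement carries a frequency (ratio of positive evidence) and a confidence (how much evidence backs it), and revision pools independent evidence rather than deriving certainties from axioms. The sketch below is purely illustrative – the function names and the evidential-horizon constant are conventions of this example, not part of Replicode or AERA.

```python
K = 1.0  # evidential horizon: how much future evidence we weigh against (a common NAL default)

def to_evidence(f, c):
    """Convert a (frequency, confidence) truth value to evidence counts."""
    w = K * c / (1.0 - c)      # total amount of evidence
    return f * w, w            # (positive evidence, total evidence)

def from_evidence(w_plus, w):
    """Convert evidence counts back to a (frequency, confidence) truth value."""
    return w_plus / w, w / (w + K)

def revise(t1, t2):
    """Pool two judgements about the same statement, based on independent evidence."""
    p1, w1 = to_evidence(*t1)
    p2, w2 = to_evidence(*t2)
    return from_evidence(p1 + p2, w1 + w2)
```

For example, revising two agreeing judgements `(1.0, 0.5)` and `(1.0, 0.5)` keeps the frequency at 1.0 but raises the confidence to 2/3: the conclusion is never certain, only better supported – the defining property of a non-axiomatic logic.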

AERA stands for Auto-catalytic Endogenous Reflective Architecture. We have adopted a stringent definition of autonomy: an autonomous system is one that is both operationally and semantically closed. In our context, auto-catalysis refers directly to the operational closure – the ability of a system to expand and modify its own internal agency by means of the operation of said agency (its own architectural structure). Reflectivity refers to the semantic closure, i.e. the ability of a system to control the (re)organisation of its agency. An AERA-based system is also endogenous in the sense that it is (a) self-maintained and (b) originates from itself through interaction with its environment – the processes implementing the two aforementioned closures are internal and cannot be modified by anyone but the system itself.

Replicode has been designed to address our specific requirements. In the main, it must:

  1. ease the programming and control of large populations of parallel fine-grained data-driven processes,
  2. treat what the system itself does as first-class knowledge – to allow the system to reason about its own operation,
  3. implement the non-axiomatic logic mentioned earlier, along with the necessary reasoning mechanisms,
  4. represent knowledge as executable shared models – which means essentially that things in the world, which the system can think about, are defined by what can be done with them or about them, what other agents do with them, or what the system can predict about them,
  5. drive knowledge formation by goals and predictions,
  6. allow the formation of dynamic model hierarchies – dynamic because models come and go as they are learned, defeated, or put out of context; hierarchies because predictions flow up the system's abstraction hierarchy while goals flow down,
  7. simulate the outcomes of different possible courses of action and commit to the most plausibly appropriate one,
  8. allow “throttling-down” some cognitive tasks when resources become scarce to focus on other tasks considered by the system as being more critical at any moment in time,
  9. operate in soft real-time, and
  10. allow the distribution of the computation load over a cluster of computers.
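Items 4–6 above can be made concrete with a minimal sketch of a flat model store in which observed facts chain upward into predictions while goals chain downward into subgoals. This is written in Python rather than Replicode, and the toy "cause implies effect" model encoding and the fact names are assumptions of this example, not AERA's actual representation.

```python
from dataclasses import dataclass

@dataclass
class Model:
    # Toy model: "if `cause` is observed, predict `effect`" (hypothetical encoding)
    cause: str
    effect: str

# A two-level toy hierarchy of executable models.
models = [Model("hand-moves", "object-moves"),
          Model("object-moves", "object-at-target")]

def predict(fact, models):
    """Forward chaining: a fact propagates up the hierarchy as predictions."""
    out, frontier = [], [fact]
    while frontier:
        f = frontier.pop()
        for m in models:
            if m.cause == f:
                out.append(m.effect)
                frontier.append(m.effect)
    return out

def subgoals(goal, models):
    """Backward chaining: a goal propagates down the hierarchy as subgoals."""
    out, frontier = [], [goal]
    while frontier:
        g = frontier.pop()
        for m in models:
            if m.effect == g:
                out.append(m.cause)
                frontier.append(m.cause)
    return out
```

With these two models, observing `"hand-moves"` yields the predictions `["object-moves", "object-at-target"]`, while the goal `"object-at-target"` decomposes into the subgoals `["object-moves", "hand-moves"]` – the bidirectional flow described in item 6. The dynamism mentioned there would correspond to adding and removing entries of `models` at runtime as models are learned, defeated, or put out of context.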