Media




AERA DEMOS


AERA Agent S1 Interviewing a Human

After having observed two humans interact in a simulated TV interview for some time, the AERA agent S1 here takes the role of interviewer, continuing the interview in precisely the same fashion as when it was conducted by the two humans (see videos HH.no_interrupt.mp4 and HH.interrupt.mp4 for the human-human interaction that S1 observed; see MH.no_interrupt.mp4 and HM.interrupt.mp4 for other examples of the skills that S1 has acquired by observation). In the "interrupt" scenario (MH.interrupt.mp4) S1 has learned to use interruption as a method to keep the interview from going over the allowed time limit.
Thumbnail of video
Video

AERA Agent S1 Being Interviewed by a Human

After having observed two humans interact in a simulated TV interview for some time, the AERA agent S1 takes the role of interviewee, continuing the interview in precisely the same fashion as before, answering the questions of the human interviewer (see videos HH.no_interrupt.mp4 and HH.interrupt.mp4 for the human-human interaction that S1 observed; see HM.no_interrupt.mp4 and HM.interrupt.mp4 for other examples of the skills that S1 has acquired by observation). In the "interrupt" scenario S1 has learned to use interruption as a method to keep the interview from going over a pre-defined time limit.
Thumbnail of video
Video

AERA Agent S1 Uses Interruption to Meet Time Limits

Having observed a human interviewer use interruption to move the interview forward, S1 takes the role of interviewer and demonstrates the acquisition of this skill by interrupting the human interviewee to meet pre-defined time limits for the interview. (See video HH.interrupt.mp4 for what S1 observed to learn this technique; see HH.no_interrupt.mp4 for the general human-human interaction that S1 learned interview skills from; see HM.no_interrupt.mp4 and HM.interrupt.mp4 for other examples of the skills that S1 has acquired automatically by observation).
Thumbnail of video
Video

Human-Human Interview Demonstrating Interruptions

This human-human interaction (HH.interrupt.mp4) was observed by the AERA agent S1 while it was learning how to conduct TV-style interviews on recycling. Two humans, Kris and Eric, take part in the interview. Their behaviors are tracked in real time by sensors, and they speak to each other via microphones. The interaction happens in real time, just as in a remote meeting. S1 observes their movements and words via simple off-the-shelf speech recognition software and prosody tracking, geared to produce time-accurate data about the session (see the sketch after this entry for an illustration of such timestamped data). After observing for 20 hours, S1 has learned how to achieve this task and can take over control of either avatar and let the interview carry on in precisely the same fashion as before (see videos MH.no_interrupt.mp4, HM.no_interrupt.mp4, HM.interrupt.mp4). In the "interrupt" scenario S1 has learned, by induction from these human-human interactions in which interruption is demonstrated, to use interruption as a method to keep the interview from going over a pre-defined time limit.
Thumbnail of video
Video
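The description above mentions that the observation pipeline produces time-accurate data about the participants' movements, words, and prosody. As a rough illustration only, here is a minimal Python sketch of what one such timestamped observation record could look like; the class name, field names, and example values are hypothetical and are not taken from AERA/S1 or from the recorded sessions.

```python
from dataclasses import dataclass

# Hypothetical sketch (not the actual AERA/S1 data format): each observed
# event carries a session timestamp, the actor it was observed from, the
# modality it came from (speech, prosody, movement), and a value.
@dataclass
class ObservationEvent:
    timestamp_ms: int  # time of the event within the session
    actor: str         # e.g. "interviewer" or "interviewee"
    modality: str      # "speech", "prosody", or "movement"
    value: str         # recognized word, prosodic feature, or tracked pose label

# Illustrative excerpt of an observed exchange; values are made up.
session_log = [
    ObservationEvent(1000, "interviewer", "speech", "so"),
    ObservationEvent(1150, "interviewer", "speech", "tell"),
    ObservationEvent(1300, "interviewer", "prosody", "rising_pitch"),
    ObservationEvent(1400, "interviewee", "movement", "head_nod"),
]

# An observer would consume such a time-ordered stream; here we just print it.
for event in sorted(session_log, key=lambda e: e.timestamp_ms):
    print(f"{event.timestamp_ms:>6} ms  {event.actor:<12} {event.modality:<9} {event.value}")
```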




METHODOLOGY


SAGE: Task-Environment For Evaluating a Broad Range of AI Learners

"SAGE: Task-Environment For Evaluating a Broad Range of AI Learners"
presented by L. Eberding. (link to paper).
Thumbnail of video
Video

From Constructionist to Constructivist AI: Architecture Matters

"From Constructionist to Constructivist A.I.: Architecture Matters" (link to 2009 paper; link to 2012 paper)
presented by Kristinn R. Thórisson. Part of the Constructivist A.I. Workshop 2011.
Thumbnail of video
Video

Growing Recursive Self-Improvers

"Growing Recursive Self-Improvers" (link to paper)
presented by Bas Steunebrink at the Ninth Conference on Artificial General Intelligence (AGI-16) in New York, co-authored by E. Nivel, J. Schmidhuber & K. R. Thórisson.
Thumbnail of video
Video

Why AI Needs a Task Theory and What it Might Look Like

"Why AI Needs a Task Theory — And What It Might Look Like" (link to paper)
presented by Kristinn R. Thórisson at the Ninth Conference on Artificial General Intelligence (AGI-16) in New York.
Thumbnail of video
Video

How to Define & Measure Understanding

Part of a session on Machine Understanding held at AGI-17 (link to 2017 paper 1: http://alumni.media.mit.edu/~kris/ftp/IJCAI17-EGPAI-EvaluatingUnderstanding.pdf; link to 2017 paper 2; link to 2017 paper 3).
Part 2 of "Do Machines Understand? A Short Review of Understanding & Common Sense in Artificial Intelligence" by K. Thórisson and D. Kremelberg.
Thumbnail of video
Video

Discussions about Logic, Modeling, Simulating, and Replicating Human-Level Intelligence

Discussions following Dr. Selmer Bringsjord's invited lecture "Logic is the Key to Modeling, Simulating, and (Partially) Replicating Person-Level AGI (Artificial General Intelligence)" at the Reykjavik University (RU) AGI Summer School in August 2012, supported in part by the 7th European Community Framework Programme.
Thumbnail of video
Video




BEIJING SUMMER SCHOOL AERA VIDEOS


AERA Session 1

Kris Thórisson's first lecture on AERA at the AGI Summer School of 2013 in Beijing in the afternoon of July 17.
Thumbnail of video
Video

AERA Session 2

Kris Thórisson's second lecture on AERA at the AGI Summer School of 2013 in Beijing in the afternoon of July 18.
Thumbnail of video
Video

AERA Session 3

Kris Thórisson's third lecture on AERA at the AGI Summer School of 2013 in Beijing in the afternoon of July 19.
Thumbnail of video
Video

AERA Session 4

Kris Thórisson's fourth lecture on AERA at the AGI Summer School of 2013 in Beijing in the afternoon of July 22.
Thumbnail of video
Video

AERA Session 5

Eric Nivel gives a fifth lecture on AERA at the AGI Summer School of 2013 in Beijing in the morning of July 24.
Thumbnail of video
Video

AERA Session 6

Eric Nivel gives a sixth and final lecture on AERA at the AGI Summer School of 2013 in Beijing in the afternoon of July 24.
Thumbnail of video
Video




INTERVIEWS & PRESENTATIONS


Can artificial intelligence become sentient?

They call it the holy grail of artificial intelligence research: Building a computer as smart as we are. Can artificial intelligence become sentient, or smarter than we are, and then what? Some say it could help eradicate poverty and create a more equal society – while others warn that it could become a threat to our very existence. But how far are we from reaching such “artificial general intelligence”? And what happens if machines, at some point, outsmart us? Deutsche Welle, July 14, 2022.
Thumbnail of video
Video

Program about AERA on Iceland’s National TV (RÚV)

A group of scientists from five European countries, led by an AI researcher from Iceland, has succeeded in doing what no one has done before: creating a machine capable of programming itself. RÚV, Nov. 24, 2014.
Thumbnail of video
Video




OTHER VIDEOS


Introduction to the Icelandic Institute for Intelligent Machines (IIIM)

Dr. Thórisson discusses the importance of AI and the part it plays at the Icelandic Institute for Intelligent Machines (IIIM): the Institute's purpose, function, past and current research, and what the future may bring. Spring 2010.
Thumbnail of video
Video

Simultaneous Machine Learning of Many Diverse Tasks

"Simultaneous Machine Learning of Many Diverse Tasks"
Presented by Deon Garrett, IIIM
Part of IIIM & CADIA Open Day at Reykjavik University
May, 2012
Thumbnail of video
Video