Historic and important/interesting work. Let me know what I am missing.

AGI is not a single thing. Intelligence has many facets that will be reached at different times by different software. DeepMind proposes a framework to measure how far along we are, similar to the SAE levels of autonomous driving.

Levels of AGI: Operationalizing Progress on the Path to AGI
Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet and Shane Legg
Google DeepMind
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. These principles include focusing on capabilities rather than mechanisms; separately evaluating generality and performance; and defining stages along the path toward AGI, rather than focusing on the endpoint. With these principles in mind, we propose "Levels of AGI" based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.
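The paper's performance axis (depth) can be sketched as a small lookup table. The level names below follow the paper; the one-line descriptions are paraphrased, and the `AGILevel` class and `name_for` helper are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AGILevel:
    level: int
    name: str
    performance: str  # depth of capability, relative to skilled adults

# Performance (depth) axis only; generality (breadth: Narrow vs. General)
# is the second, orthogonal axis in the paper's framework.
LEVELS = [
    AGILevel(0, "No AI", "non-AI software, e.g. a calculator"),
    AGILevel(1, "Emerging", "equal to or somewhat better than an unskilled human"),
    AGILevel(2, "Competent", "at least 50th percentile of skilled adults"),
    AGILevel(3, "Expert", "at least 90th percentile of skilled adults"),
    AGILevel(4, "Virtuoso", "at least 99th percentile of skilled adults"),
    AGILevel(5, "Superhuman", "outperforms 100% of humans"),
]

def name_for(level: int) -> str:
    """Return the paper's name for a given performance level."""
    return next(l.name for l in LEVELS if l.level == level)
```

Under this framing, a chess engine like Stockfish would be Superhuman but Narrow, while today's LLMs sit at lower performance levels on the General column.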

Agents are the next big thing. This one is from Singapore, Hong Kong, and UCLA. When the AI is embodied in an active device, you have a robot. Also coming soon.

Penetrative AI: Making LLMs Comprehend the Physical World

Huatao Xu†, Liying Han§, Mo Li∗†, Mani Srivastava§
∗Hong Kong University of Science and Technology, †Nanyang Technological University, §University of California Los Angeles
Email: huatao001@ntu.edu.sg, {liying98, mbs}@ucla.edu, lim@cse.ust.hk

ABSTRACT

Recent developments in Large Language Models (LLMs) have demonstrated their remarkable capabilities across a range of tasks. Questions, however, persist about the nature of LLMs and their potential to integrate common-sense human knowledge when performing tasks involving information about the real physical world. This paper delves into these questions by exploring how LLMs can be extended to interact with and reason about the physical world through IoT sensors and actuators, a concept that we term "Penetrative AI". The paper explores such an extension at two levels of LLMs' ability to penetrate into the physical world via the processing of sensory signals. Our preliminary findings indicate that LLMs, with ChatGPT being the representative example in our exploration, have considerable and unique proficiency in employing the knowledge they learned during training to interpret IoT sensor data and reason about tasks in the physical realm. Not only does this open up new applications for LLMs beyond traditional text-based tasks, but it also enables new ways of incorporating human knowledge into cyber-physical systems.
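The core move here can be sketched in a few lines: serialize raw sensor readings into a text prompt and let the LLM's common-sense knowledge do the interpretation. The sensor values, sampling rate, and prompt wording below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical sketch of the "Penetrative AI" idea: turn raw IoT sensor
# readings into a text prompt that an LLM (e.g., ChatGPT) can reason over.

def build_activity_prompt(accel_magnitudes):
    """Format accelerometer magnitude samples (m/s^2) into an LLM prompt."""
    readings = ", ".join(f"{v:.2f}" for v in accel_magnitudes)
    return (
        "You are analyzing smartphone accelerometer data.\n"
        f"Magnitude samples at 10 Hz (m/s^2): {readings}\n"
        "Using common-sense knowledge of human motion, is the user likely "
        "still, walking, or running? Answer with one word."
    )

# Illustrative readings: oscillation around gravity (~9.8 m/s^2) suggests motion.
prompt = build_activity_prompt([9.81, 9.79, 12.40, 7.10, 13.02, 6.85, 12.77])
# `prompt` would then be sent to an LLM API; the API call is omitted here.
```

The interesting part is what the model brings to the table: it already "knows" that resting accelerometer magnitude hovers near 9.8 m/s^2 and that rhythmic swings indicate gait, so no task-specific classifier needs to be trained.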