Abstract
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allows experimenters to “buy” arbitrary levels of skill for a system, in a way that masks the system’s own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience as critical pieces to be accounted for in characterizing intelligent systems. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a new benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
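The paper's actual formal definition requires machinery that the abstract only names, but the quantities it lists can be arranged schematically. The display below is purely a sketch of that structure, not the paper's notation or formula: intelligence is framed as the generalization difficulty a system handles per unit of priors plus experience, averaged over the tasks in its scope. The symbols (scope $\mathcal{S}$, generalization difficulty $\mathrm{GD}$, priors $P$, experience $E$) are illustrative placeholders.

```latex
% Schematic only: "intelligence as skill-acquisition efficiency",
% averaged over a scope S of tasks T. GD = generalization difficulty,
% P = prior knowledge, E = experience. Symbols are placeholders,
% not the notation of the formal definition in the paper.
\[
  I_{\text{system},\,\mathcal{S}}
  \;\propto\;
  \underset{T \in \mathcal{S}}{\operatorname{Avg}}
  \left[ \frac{\mathrm{GD}_{T}}{P_{T} + E_{T}} \right]
\]
```

Read this way, piling on priors or training data inflates the denominator: skill bought that way does not count as intelligence, which is the abstract's central argument against skill-only benchmarks.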
[ed. Click the pdf on the arXiv site for the full article. Apparently this is the acknowledged industry standard for assessing AI progress: a benchmark to determine when AI eventually reaches true Artificial General Intelligence (AGI). I don't have the ability to judge, but it appears the most powerful agent so far (as measured by this test) was quietly revealed by OpenAI the day before Christmas. o3 is a giant leap forward. See this post for everything we know so far (from 30,000 ft. up to digging in the weeds): AI #96: o3 But Not Yet For Thee (DWAV). And: Time's Up for AI Policy (Miles Brundage):]
***
"The announcement of o3 today makes clear that superhuman coding and math are coming much sooner than many expected, and we have barely begun to think through or prepare for the implications of this (see this thread) – let alone the implications of superhuman legal reasoning, medical reasoning, etc. or the eventual availability of automated employees that can quickly learn to perform nearly any job doable on a computer.]"