Former OpenAI researcher predicts AGI by 2027
Leopold Aschenbrenner, a former safety researcher at ChatGPT creator OpenAI, has doubled down on his predictions for artificial general intelligence (AGI) in his newest essay series on artificial intelligence (AI).
Dubbed “Situational Awareness,” the series offers a look at the state of AI systems and their potential over the next decade. The full series is collected in a 165-page PDF file updated on June 4.
In the essays, the researcher paid specific attention to AGI, a type of AI that matches or surpasses human capabilities across a wide range of cognitive tasks. AGI is one of several broad categories of artificial intelligence, alongside artificial narrow intelligence, or ANI, and artificial superintelligence, or ASI.
“AGI by 2027 is strikingly plausible,” Aschenbrenner declared, predicting that AGI machines will outperform college graduates by 2025 or 2026. He wrote:
“By the end of the decade, they [AGI machines] will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed [...]”
According to Aschenbrenner, AI systems could possess intellectual capabilities comparable to those of a professional computer scientist. He also made another bold prediction: that AI labs will be able to train general-purpose language models within minutes, stating:
“To put this in perspective, suppose GPT-4 training took 3 months. In 2027, a leading AI lab will be able to train a GPT-4-level model in a minute.”
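For rough scale (a back-of-the-envelope calculation, not from the essay itself):

3 months ≈ 90 days × 24 hours × 60 minutes ≈ 130,000 minutes

In other words, the claim implies roughly a 100,000-fold increase in effective training speed over that period.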
Beyond predicting AGI’s success, Aschenbrenner called on the AI community to face the reality of its arrival. According to the researcher, the “smartest people” in the AI industry have converged on a perspective he calls “AGI realism,” which rests on three foundational principles tied to United States national security and AI development.
Related: Former OpenAI, Anthropic employees call for ‘right to warn’ on AI risks
Aschenbrenner’s AGI series comes after he was reportedly fired for allegedly “leaking” information from OpenAI. He was also reportedly an ally of OpenAI chief scientist Ilya Sutskever, who took part in a failed effort to oust OpenAI CEO Sam Altman in 2023. Aschenbrenner’s latest series is dedicated to Sutskever.
Aschenbrenner also recently founded an investment firm focused on AGI, with anchor investments from figures such as Stripe CEO Patrick Collison, according to his blog.
Magazine: Crypto voters are already disrupting the 2024 election — and it’s set to continue