Artificial General Intelligence (AGI)

Artificial general intelligence (AGI) is conventionally defined as an AI system that can perform any intellectual task a human can perform. In practice, definitions vary widely: AGI has been defined as passing the Turing test, achieving human-level performance on a standardised benchmark suite, or being capable of running a successful business autonomously.

Jensen Huang’s Definition and Claim (2026)

In Lex Fridman Podcast #494 (2026-03-23), jensen-huang states: “I think we’ve achieved AGI.”

His reasoning is definitional: if AGI means an AI system capable of starting, growing, and running a billion-dollar company — even briefly — then current systems qualify. During the internet era, many websites exceeded $1B in transient value with business models no more sophisticated than what openclaw-style agentic-ai systems can generate today. A Claude-type agent could conceivably build a viral web service that monetises briefly and then winds down.

Huang is careful to separate this claim from a claim about conscious agency or human-equivalent general-purpose reasoning.

Intelligence vs. Humanity

Huang draws a sharp distinction:

  • Intelligence — functional: perception, understanding, reasoning, planning. Already commoditised. Huang is “surrounded by people more intelligent than me in every domain” and yet leads them as CEO, suggesting intelligence is not the scarce resource for leadership or impact.
  • Humanity — character, compassion, determination, tolerance for embarrassment, subjective experience. Not commoditisable. The word society should elevate is “humanity,” not “intelligence.”

He predicts AI will celebrate humans more, not less, by making intelligence cheap and thereby revealing what humans uniquely offer: embodied experience, moral character, and creative agency.

AGI Timeline for Running a Tech Company

When Fridman asks whether an AI could run an NVIDIA-scale company, Huang is sceptical: “The odds of 100,000 agents building NVIDIA is zero percent.” Building a durable, complex organisation across decades requires sustained leadership judgment, relationship-building, belief-shaping, and tolerance for long-term uncertainty that current systems cannot replicate.

Lambert and Raschka’s Assessment (Early 2026)

nathan-lambert and sebastian-raschka take a more cautious view in Lex Fridman Podcast #490 (fridman-lambert-raschka-2026-state-of-ai):

Jagged Capabilities

AI capabilities are jagged: superhuman at some programming tasks (LeetCode-style problems, certain categories of code), but poor at distributed ML systems engineering, safety-critical infrastructure, and novel research. The term “AGI” implies smooth, general capability; the actual capability profile has peaks and valleys, which is why Lambert and Raschka reject clean AGI definitions as premature.

The AI 2027 Report

Lambert and Raschka engage with the “AI 2027” report, which predicted a superhuman coder by 2027–28 (a forecast later revised to a mean of 2031). They praise its concrete, falsifiable milestones, a rarity in AGI discourse, but dispute its underlying singularity assumption: that a single capability milestone unlocks rapid self-improvement toward recursive general intelligence.

Lambert’s Timeline

  • Software automation of narrow tasks: this year (2026)
  • Automated AI researcher: 5–10 years out
  • Full recursive self-improvement / hard AGI: unspecified / sceptical

Raschka’s Framing

Raschka describes the AGI dream as “amplification not paradigm change” — continuous improvement along known curves, not a step-function discontinuity. The word “AGI” bundles too many distinct capabilities into one label.

“The Dream of AGI is Kind of Dying”

Lambert’s most provocative claim: the dream of one model solving everything is dying. Specialised models — trained on domain-specific data, evaluated on domain-specific benchmarks, deployed with domain-specific tool access — will dominate. The future is model pluralism, not a single superintelligent system.

Comparative Summary

Source                      Claim
jensen-huang (#494)         AGI already achieved under a functional definition (the billion-dollar-company test)
nathan-lambert (#490)       AGI definition is incoherent; capabilities are jagged; specialised models will dominate
sebastian-raschka (#490)    AGI is amplification, not paradigm change; continuous improvement, no step function

Sources: fridman-huang-2026-nvidia-ai-revolution | fridman-lambert-raschka-2026-state-of-ai