Nvidia CEO Jensen Huang has made one of the boldest claims in AI right now. Speaking on the Lex Fridman podcast, he said, “I think we’ve achieved AGI.” That statement alone is enough to grab attention, but it also raises a bigger question. What exactly counts as AGI in the first place?
The term is used heavily across the tech industry, but it still lacks a clear, shared definition. It is generally understood as AI that can match or surpass human intelligence across tasks, but how to measure that remains up for debate.
What is AGI, and why does no one agree on it?
AGI, or artificial general intelligence, is usually described as AI that can perform tasks at a human level across different domains. In simple terms, it is not limited to one job. It should be able to learn, adapt, and handle different kinds of work without needing retraining.
During the podcast, Fridman described AGI as a system that could effectively do your job, even building and running a billion-dollar company. That sounds simple, but the lack of a strict definition has made AGI a moving target.
This is also why the term has become controversial. Some companies are moving away from the term AGI and creating new labels, such as Amazon’s “useful general intelligence,” or Microsoft’s “Humanist Superintelligence (HSI),” even if they mean similar things.
The stakes are high, as the definition of AGI is also tied to major business agreements between companies like OpenAI and Microsoft.
Why Huang thinks we are already there

Huang pointed to the rise of AI agents as a sign that AGI is already here. He mentioned platforms like OpenClaw, where people are building agents that can perform tasks, create content, and even drive social experiences.
He suggested these agents could spark unexpected successes, like new social apps or digital influencers that grow rapidly. But he also acknowledged the limits. Many of these projects lose momentum quickly, and he admitted that the chances of thousands of agents building something like Nvidia are essentially zero.
That is where the debate begins. Some see current AI as powerful but far from general intelligence, while others believe we are already crossing that line. Last year, Google DeepMind said it could be here by 2030.
However, David Deutsch, often called the father of quantum computing, believes that true AGI will not just be software, but something closer to a person capable of independent thought and reasoning.
For now, Huang’s statement says more about how fast AI is evolving than it does about a clear arrival point. You may already be using tools that feel smarter than ever, but whether that counts as AGI is still very much up for debate.
