The claim that AGI is just a few years away is built on hype rather than hard evidence. While AI has made impressive strides in language processing, reasoning, and tool use, it is still fundamentally a pattern-matching machine. These models don’t “think” in the way humans do—they recognize statistical relationships in data and predict likely outputs. Passing benchmarks or excelling in narrow tasks doesn’t mean AI has achieved general intelligence.
A key flaw in your argument is the assumption that scaling up compute and data will inevitably lead to AGI. While increased computing power has driven improvements, it doesn’t solve fundamental challenges like common sense reasoning, causal understanding, or autonomous goal-setting. These limitations suggest that AGI requires more than just “turning the crank” on existing architectures—it likely demands breakthroughs in how machines reason, learn, and interact with the world.
Additionally, you do little more than repackage Sam Altman’s predictions as if they are objective truths. There is no original critique or independent thought in your "blog"—just a restatement of OpenAI’s latest talking points, dressed up with graphs and buzzwords. By blindly echoing Altman’s optimism without engaging with counterarguments or historical AI failures, your work reads more like a marketing pitch than a serious analysis. The AI industry has a long history of overpromising and underdelivering on AGI, yet you present the same old “it’s just around the corner” narrative as if it’s groundbreaking.
Ultimately, the assumption that AGI is inevitable in the near future is based more on wishful thinking than concrete evidence. Until AI demonstrates actual general intelligence—like learning new tasks without retraining, reasoning beyond statistical patterns, and adapting to unpredictable situations—claims of AGI being imminent should be treated with skepticism.
"it is still fundamentally a pattern-matching machine" To be fair, human thinking is largely a pattern-matching process too.
AI is just a machine process and a tool. It's not necessary for AI to think exactly like humans for AI to achieve human-level performance on reasoning or intellectual tasks, any more than a submarine needs to swim like a fish to move through water.
"it doesn’t solve fundamental challenges like common sense reasoning, causal understanding, or autonomous goal-setting." And yet when benchmarks to assess such capabilities, like ARC-AGI, AIME 2024 are set up, the AI reasoning models show significant progress on them.
The concrete evidence that AGI is coming soon is the progress of AI across the variety of tasks that define what we mean by AGI. That's why it's largely a matter of definition. If we defined it by the Turing Test, we'd be there already. Define it strictly and yes, you might argue it's 10+ years out. But "AI that does most virtual work tasks at a median human level" seems achievable by scaling along the 3 dimensions of capability I mention.
Two of Altman's 3 points are about AI scaling progress - the log scaling laws and the massive drop in the price of inference. These are already well understood, backed by data, and things I stated myself over a year ago, so it's not surprising I'd echo them; that's not blind agreement, it's agreement with the data. His third point, about the value of higher-level AI, I quibbled with, and I'd like to see quantification.
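If it helps, here's a minimal sketch of what I mean by the log scaling laws: loss improves roughly as a power law in compute, which shows up as a straight line on log-log axes. The exponent and reference constant in the snippet are illustrative placeholders I'm assuming for this toy example, not fitted values from any particular study.

```python
# Toy illustration of a power-law "scaling law": loss falls smoothly as
# compute grows, so each constant-factor improvement in loss requires a
# multiplicative increase in compute (a straight line on log-log axes).
# ALPHA and C_REF are hypothetical placeholders, not published estimates.
ALPHA = 0.05   # hypothetical scaling exponent
C_REF = 1.0    # hypothetical reference compute (arbitrary units)

def loss(compute: float) -> float:
    """Power-law loss curve: L(C) = (C_REF / C) ** ALPHA."""
    return (C_REF / compute) ** ALPHA

for c in (1e3, 1e6, 1e9):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```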
My opinion, which I expressed as early as March 2023, is that AI can be very transformative, valuable, and disruptive even without AGI. It doesn't require AGI to replace 20% or even 50% of the most mundane tasks in our economy and massively boost productivity. There's a lot of automation possible just with AI video recognition, AI customer support bots, AI research assistant tools, AI coding copilots, AI writing tools, etc. As AI advances, that scope expands. I think lower-end tasks are more important to our economy than ASI capabilities because they are so much more common.
So, as I've stated before on my blog, reaching AGI is less important than assessing "what can AI do practically for me now and in the near future?"
Finally, I do recognize - and have noted - that Altman and OpenAI, and all tech leaders, engage in hype. I aim to avoid cheerleading and hype, and instead to be on-the-level and fact-based. I've fallen for fake AI hype a few times (e.g. Devin and Rabbit), so I'm wary, and I'll take your point into consideration. FWIW, I don't take what Altman says at face value and I filter his claims, but in this case, IMHO, he assesses the state of things accurately. You have every right to a different opinion, and we'll see how it shakes out.
Your response equates human thinking to AI pattern-matching, but this is a false equivalence. While humans do rely on pattern recognition, they also engage in causal reasoning, abstraction, and intuitive decision-making—things AI fundamentally lacks. A submarine and a fish both move through water, but intelligence isn’t a question of movement; it’s a question of how an entity understands and interacts with the world. AI may outperform humans in structured tasks, but that doesn’t mean it understands what it’s doing.
You also shift the goalposts on AGI by treating it as a matter of definition. If we redefine AGI to mean "AI that does most virtual work tasks," then sure, we might be close. But that’s not how AGI has historically been understood—AGI implies a machine with broadly human-like intelligence across diverse, open-ended domains. Simply excelling at curated benchmarks does not prove true general intelligence. These tests measure narrow competencies, not the ability to generalize, adapt, or autonomously set goals outside of predefined tasks.
Furthermore, pointing to AI’s rapid scaling and cost reductions as evidence that AGI is near is speculative. Past AI progress has often followed boom-and-bust cycles, with scaling eventually hitting limits that require paradigm shifts. We’ve seen immense progress in specific AI applications, but none of it guarantees that we’re just a few tweaks away from AGI.
Your argument also misframes the debate by focusing on AI's current usefulness. Nobody denies that AI has real-world applications, but that’s not the point in question. The issue is whether AGI is imminent, and there’s no solid evidence for that, only extrapolation from trends that may not hold indefinitely.
Finally, while you claim to be objective and skeptical of AI hype, your response largely echoes Altman’s narrative. Recognizing that tech leaders exaggerate is one thing, but continuing to rely on their arguments without deeper scrutiny suggests an implicit bias. True skepticism would demand stronger evidence than trend-based extrapolation before concluding that AGI is just around the corner.
Just curious, how deep is your actual hands-on experience with AI development?
Wrt my experience in AI: I have a PhD in Computer Science and 35+ years of experience in EDA (electronic design automation) and AI/ML/NLP, in various roles including EDA project lead, ML engineer, technical manager, and CTO. In recent years, I've been writing about AI - here - while also using LLMs and AI tools/apps and building AI tools with AI frameworks.