The AI Skill Issue
Karpathy's lament exposed a feeling many of us share. We are feeling not the AGI but the FOMO: the fear that we are not getting out of AI what we should. It's a skill issue, but we can fix it.
Karpathy Identifies the AI Skill Issue
I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and in between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. – Andrej Karpathy
AI pioneer Andrej Karpathy, a former AI researcher at OpenAI and Tesla, has a knack for leading the public conversation around AI. He coined the term “Vibe Coding” earlier in 2025 to capture how users can let AI lead in crafting software. Now his latest X post reads like a lament: “I’ve never felt this much behind as a programmer.”
Karpathy sees software development being dramatically refactored by AI, yet AI arrives as another abstraction layered on top of the traditional software stack (languages, libraries, and IDEs). Developers must now reason about prompts, contexts, agents, subagents, memory, tools, plugins, skills, MCP, LSP, workflows, and IDE integrations.
Developers who want to be AI-enhanced need a new set of AI prompting and AI agent management skills. At the same time, letting the old skill sets of languages and software principles atrophy leaves them vulnerable: if a vibe coder lets AI make all the decisions and the product has flaws or bugs, they can end up with broken code they don’t understand and cannot fix.
Karpathy exposed the reality that even experts are scrambling to internalize these AI tools. Those who do it well will be the 10x engineers; those who don’t make that leap risk being left behind.
Feeling the AI Productivity FOMO
Karpathy’s post tapped into a widespread anxiety about AI adoption and use. We struggle to keep up with rapidly evolving AI tools and workflows. There is a gap between what AI can do and what we actually get out of it, and each new release widens that gap until we absorb and apply it.
Many working engineers and data scientists have responded with relief at seeing they are not alone in their insecurity about AI disruption. Even Boris Cherny, the creator of Claude Code, stated:
I feel this way most weeks tbh. Sometimes I start approaching a problem manually and have to remind myself “Claude can probably do this.”
However, he followed up by saying he had used Opus 4.5 and Claude Code to write 200 PRs in the past month, a phenomenal level of productivity.
The gap is that the majority are not achieving anywhere close to that. Demos showing AI creating apps in minutes are deceptive. It’s easy to get the latest AI models to one-shot a generic app from a single prompt these days, but it’s much harder to craft exactly what you want. The new AI-enabled development flow involves orchestrating AI agents, delegating tasks, and reviewing probabilistic outputs.
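To make that flow concrete, here is a minimal Python sketch of a delegate-and-review loop. It is illustrative only: run_agent is a hypothetical stub standing in for whatever agent CLI or API you actually use, and the review gate is simply your project’s test suite (pytest is assumed).

```python
import subprocess

def run_agent(task: str, feedback: str = "") -> None:
    """Hypothetical placeholder for invoking your coding agent (CLI or API).
    Replace this stub with the real agent call you use."""
    print(f"[agent] working on: {task} {feedback}".strip())

def review_passes() -> bool:
    """Automated review gate: here, just run the project's test suite.
    A human read of the diff should still follow."""
    result = subprocess.run(["pytest", "-q"], capture_output=True)
    return result.returncode == 0

def delegate(task: str, max_attempts: int = 3) -> bool:
    """Delegate a task, review the probabilistic output, and retry with
    feedback rather than accepting the first answer."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        run_agent(task, feedback)
        if review_passes():
            print(f"Accepted on attempt {attempt}; now review the diff yourself.")
            return True
        feedback = "(previous attempt failed the tests; please fix and retry)"
    return False

if __name__ == "__main__":
    delegate("add input validation to the signup endpoint")
```

The point is the shape of the flow: the agent’s output is treated as a draft to be verified and iterated on, not as a final answer.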
Professionals are being forced to rethink what “programming” means when a large share of code can be drafted or transformed by AI assistants. Figuring this all out in an environment where AI models, tools and paradigms are shifting creates both cognitive overload and fear of obsolescence.
While software engineering has been the tip of the spear for agentic AI, every knowledge worker will face some variation of this challenge in the coming years. An AI that can autonomously craft thousands of lines of code can attempt most knowledge work tasks, from legal briefs to tax accounting analysis to sales brochures.
Interpreting the Challenge
Some interpreted the Karpathy discussion of the AI skill issue as signaling a breakpoint in software development. Experienced developers, with decades of engineering practice optimized for stable APIs and deterministic behavior, are now trying to handle AI systems that are probabilistic, fast-changing, and opaque.
According to Sebastian Raschka, this is made worse by “trying to do too much” instead of focusing deeply. As Centrox AI puts it:
It’s essentially cognitive load saturation; context-switching across multiple stacks or research domains fragments attention and limits expertise consolidation.
The challenge as of late is that the rate of change is the highest it’s been in recent memory, so picking one and going deep while trying to keep a lens on all the change is just hard.
AI Surprises Equal Learning
I have felt AI productivity FOMO personally. Each new AI release turns prompts and context approaches that were previously ideal into potentially inferior practices. When I don’t get a good result out of AI, I need to check whether it’s the AI or my failure to prompt it correctly.
Sometimes I am surprised in a good way. For example, Gemini 3 Pro Deep Research recently produced remarkably good marketing strategy reports on an extremely narrow, personalized topic. The output was specific and insightful, with little verbal fluff, and it went far deeper than anything I had gotten from it before. Now I know I can push this AI tool harder.
We have to remind ourselves that it’s okay if AI surprises us, at least for a while. When AI surprises us, good or bad, that is what learning feels like.
Is 10X Productivity with AI an Illusion?
Some skeptics have questioned the post’s implications, arguing that it risks overstating the near-term benefits of AI. From this viewpoint, the “10x” productivity gains are an illusion, and the FOMO is just a measure of how far the hype has outrun reality.
They have some valid points: AI can be used to produce low-quality code and spam, and enthusiasm for automated development can collide with craftsmanship in software engineering and leave some people in a rut.
However, the counterpoint is that these are examples of AI misuse; the skeptical view wrongly implies that the negative cases are the only uses of AI.
One positive example is Peter Steinberger announcing that “I ship code I never read.” In a blog post titled “Shipping at Inference Speed,” he shares his 2025 workflow; it is based on GPT-5.2 Codex right now, and it’s a marvel of software developer productivity.
The takeaway: while AI can be abused, it can also be used to achieve 10-fold developer productivity. The examples are out there. As Karpathy suggested, if you are not at 10x with AI, it’s a skill issue.
The Technical Underpinnings of the Shift
Some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. – Andrej Karpathy
Every technology goes through an adoption curve of several phases: invention, productization, adoption, and finally integration.
The first phase is technology research, whose outcome is the invention of the core technology. Its metrics are raw capabilities, which for AI have been intelligence benchmarks. Then comes technology product development, the productization of the technology, where the metrics are real-world functional performance and user experience.
AI in 2025 brought us both huge advances in AI intelligence levels and big improvements in AI productization. The build-out of a better ChatGPT and of AI-first tools like NotebookLM, Cursor, and Claude Code shows how AI products have advanced and AI user experience has improved.
We are also ramping up in the technology adoption phase. Usage of AI exploded in 2025. The outcome is beneficial utilization; the metrics to measure it are positive business and personal impact.
The final phase, which will take years to mature, is technology integration. This is where we learn to co-develop better procedures, processes and productivity hacks to make the most out of AI.
The AI skills gap is the gap between raw AI capabilities, i.e., AI intelligence, and our ability to use AI beneficially to its full extent. We feel the AI FOMO because we haven’t yet translated AI’s full potential into large productivity improvements for ourselves.
How to Manage the AI Skills Gap
There’s a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. … Roll up your sleeves to not fall behind. – Andrej Karpathy
My advice to anyone wanting to understand and benefit from AI is simple: use AI. Simply using AI isn’t enough, though. You need to grasp how to use AI by experimenting with it and learning what works and what doesn’t for your tasks.
With AI development on a compressed timeline, however, learning AI is like hitting a moving target. AI is far better than it was just 6 months ago, and what works now might be different from what worked then. The AI productivity gap is a learning curve gap.
Feeling behind in the use of AI reflects this transition period where AI technology keeps forging ahead of our adoption. The AI keeps getting better at agentic tasks, while many are still prompting AI like a chatbot.
The solution: Climb the AI learning curve on the same compressed timeline that AI is on.
Knowledge workers using AI must learn to split tasks between humans and AI, articulate problems clearly, and manage AI-driven workflows. Avoid full automation that costs you control of the final product; instead, treat AI as a powerful but unpredictable junior collaborator, and manage it by developing skills in prompt design, workflow design, and error handling.
One consistent pattern for success is a “trust but verify” approach: set clear standards for AI outputs, then have both AI and humans check the results to ensure correctness, as in the sketch below.
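As a rough illustration of trust-but-verify, this sketch layers automated checks beneath a final human sign-off. The specific commands (ruff and pytest) are assumptions standing in for whatever standards your team sets; an AI-critique pass could slot in as another check.

```python
import subprocess

# Clear standards for AI output: every automated check must pass
# before a human even looks at the result. The commands below are
# assumed examples; substitute your own standards.
CHECKS = [
    ["ruff", "check", "."],  # style/lint standard
    ["pytest", "-q"],        # correctness standard: the test suite
]

def machine_verified() -> bool:
    """Run every automated check; any failure rejects the AI output.
    A second-model AI critique of the diff could be added here."""
    for cmd in CHECKS:
        if subprocess.run(cmd, capture_output=True).returncode != 0:
            print("Failed check:", " ".join(cmd))
            return False
    return True

def human_signoff() -> bool:
    """Final gate: a person reads the diff and explicitly approves."""
    return input("Reviewed the diff. Ship it? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    if machine_verified() and human_signoff():
        print("Verified: safe to merge.")
    else:
        print("Rejected: send feedback to the AI and iterate.")
```

The design choice here is that trust flows one way: the AI’s output earns its way past machine checks first, and a human always holds the final approval.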
Between a moving AI capability frontier and many new concepts, it’s a lot to handle. Focus on a few core AI tasks and capabilities at a time; get good at the ones you reuse a lot. Balance exploring new ideas with exploiting the workflows you’ve already gotten to work well for you.
Finally, well-designed AI UX (AI user experience) is a key enabler of that 10-fold productivity. As AI applications are built for utility and ease of use, it will get easier to win with AI.
The focus of AI application development is shifting from AI model intelligence to broader aspects such as user experience. The AI-first applications that win might not be the ones with the smartest AI; many AI models are getting “good enough.” Rather, the most successful AI systems will combine AI intelligence with interfaces that deliver the best user experience and most productive use.
Software engineering is radically changing, and the hardest part even for early adopters and practitioners like us is to continue to re-adjust our expectations. And this is *still* just the beginning. – Boris Cherny, Anthropic
Postscript
As I climb that AI learning curve myself, I have a 2026 resolution to focus “AI Changes Everything” on content that helps you learn to use AI better. Stay tuned for more!


