Nine Months, One Word, A Collective Awakening

On February 2, 2025, Andrej Karpathy casually dropped a term on X: Vibe Coding.

The state he described was simple—complete immersion in the atmosphere, going with the flow, even forgetting you’re writing code. No rigorous definition, no academic framework, just two English words.

Nine months later, Collins Dictionary selected it as the word of the year.

By contrast, "cloud computing" took several years after its 2006 introduction to enter popular vocabulary. The speed at which Vibe Coding spread is itself a signal: it isn't describing a new technology, but naming a collective experience that had already occurred. When a word travels that fast, it has touched something real that was waiting to be spoken.

A Decade of Archaeology by the Master of Naming

Karpathy isn't just an AI researcher; he may be the most precise coiner of technical terms of our era. Four terms in ten years, each standing at a turning point in the human-machine relationship.

In 2015, Karpathy popularized “hallucination” in the context of language models in his essay “The Unreasonable Effectiveness of Recurrent Neural Networks,” describing how language models generate URLs and content that appear reasonable but are completely fabricated (while “hallucination” in AI dates back to the 1980s, Karpathy’s usage made it the standard term for describing LLM-generated errors).

2017 brought Software 2.0. Traditional software is humans writing rules for machines to execute; Software 2.0 is humans feeding data for machines to learn rules themselves. This wasn’t just a switch in technical approach, but a fundamental redefinition of “who writes the programs.”

2025’s Vibe Coding shifted focus from “how AI learns” to “how humans use it.” Programmers no longer scrutinize every line of code, but converse with AI, iterate, and move forward when it feels right. Rigor yields to intuition, control yields to trust.

Then came 2026’s Claws. This term refers to a new layer above AI agents: orchestration, scheduling, context management, tool invocation, persistence. Karpathy used an intuitive metaphor—underlying large models are wheat harvested from the field, agents are ground flour, and Claws are baked bread, ready to use.
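The wheat-flour-bread layering can be made concrete with a sketch. This is purely illustrative: the class and method names below are hypothetical stand-ins, not the API of any real "Claws" framework, and the model call is a stub.

```python
# Hypothetical sketch of the three layers the metaphor describes.
# All names here are invented for illustration.

class Model:
    """Raw capability: text in, text out (the "wheat")."""
    def complete(self, prompt: str) -> str:
        return f"response to: {prompt}"   # stub for a real model call

class Agent:
    """Wraps a model for goal-directed work (the "flour")."""
    def __init__(self, model: Model):
        self.model = model

    def step(self, task: str) -> str:
        return self.model.complete(task)

class Claw:
    """The layer above agents (the "bread"): orchestration,
    scheduling, shared context, and persistence."""
    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents
        self.context: list[str] = []      # context carried across calls

    def run(self, agent_name: str, task: str) -> str:
        result = self.agents[agent_name].step(task)
        self.context.append(result)       # persist results for later steps
        return result

claw = Claw({"coder": Agent(Model())})
out = claw.run("coder", "write a parser")
```

The point of the sketch is the division of labor: the agent does one task, while the orchestration layer decides who runs, remembers what happened, and keeps state alive between invocations.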

Mac Mini sales reportedly surged on demand for running local AI agents, selling like hotcakes. This isn't about server farms anymore; this is about what's on your desk.

Talk is cheap → Code is cheap

On February 23, 2026, Simon Willison, co-creator of the Django web framework, released a new project called Agentic Engineering Patterns, opening with a bombshell:

“Writing code is cheap now.”

If you've spent time in software circles, you know what this sentence overturns. In 2000, Linus Torvalds wrote the line now etched into countless engineers' minds: "Talk is cheap. Show me the code." That line defined a generation's engineering culture: code was the scarce resource, the ultimate carrier of value, and only those who could write it counted.

Twenty-five years later, Willison flipped it. Code is cheap. Show me the talk. What counts now is how you describe requirements and how you make decisions.

This isn't rhetorical play. Google Principal Engineer Yana Dogan said her team spent all of 2025 building a distributed agent orchestrator; in 2026, the same system was completed in just one hour with Claude Code. Vercel CTO Malte Ubl, working with Opus 4.5, finished two major open-source projects over a vacation, started writing a book, and fixed a pile of bugs; he said it would have been "absolutely impossible without AI."

A year’s worth of work compressed into one hour. This isn’t linear efficiency improvement; this is a phase transition in cost structure.

Things “Not Worth Doing” Suddenly Become Worth Doing

Most people's first reaction to "AI makes writing code cheaper" is "great, faster delivery." That reading is correct but shallow.

The real revolution isn’t in making existing work faster, but in making things that were “originally not worth doing” worth doing.

Every development team keeps an invisible list: features judged to have a poor cost-benefit ratio that never reach the top of the backlog, improvements that "would be nice but cost too much to build," requirements that "serve too few users, so we won't do them." When the marginal cost of writing code approaches zero, that entire list suddenly comes alive.

Willison’s advice is practical: whenever your instinct says “not worth the time,” try it with AI first. The worst outcome is wasting a few cents on tokens; the best outcome is discovering an opportunity that couldn’t have existed before.

But he honestly adds: “good code” remains expensive. Functional correctness, edge case handling, maintainability, test coverage, documentation quality—these quality standards haven’t decreased because of AI. What’s cheap is the first draft, not the finished product.

What This Means for Taiwan's Knowledge Workers

The logic of "code becoming cheap" doesn't apply only to software development. Replace "code" with any knowledge-work output, whether drafts, reports, analyses, or design mockups, and the same shift in cost structure is underway.

Writing: AI can generate drafts quickly, but judging which viewpoints deserve development and which paragraphs to cut still falls to humans. Design: AI can produce a hundred proposals, but discerning which one truly solves the user's problem still falls to humans. Data analysis: AI can run every model, but asking the right questions and challenging unreasonable conclusions still falls to humans.

Gergely Orosz, formerly an engineering manager at Uber, distills the core capabilities of the AI era into three: judgment, distinguishing good AI output from bad; strategic thinking, knowing what to do rather than how to do it; and domain expertise, verifying the accuracy of AI-generated content.

Taiwan has one of the world's densest concentrations of tech talent, but also many roles that depend on execution rather than judgment. When AI compresses the cost of execution toward zero, only judgment creates differentiated value. This isn't a threat narrative, but it is a structural recalibration.

Grandma Doesn’t Need to Know Apps Exist

When a netizen questioned Karpathy’s Claws concept, his reply was just one sentence:

“Grandma shouldn’t have to understand technical issues like how applications are deployed, because her AI assistant should know these things.”

This sentence is an endgame prophecy. The future isn’t teaching everyone to write code, but making AI the intermediary between people and systems. Users only need to express intent—“help me book a ticket to Tainan for tomorrow”—and AI automatically decides whether to invoke existing apps or instantly generate a customized solution. The concept of “applications” might disappear from users’ cognition.
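The dispatch the Tainan example implies can be sketched in a few lines. Everything here is hypothetical: the intent classifier is a stub standing in for a model call, and the app registry is invented for illustration.

```python
# Hypothetical intent-level dispatch: the assistant decides whether an
# existing app covers the request or a one-off solution must be built.
# Names and registry contents are illustrative only.

KNOWN_APPS = {"book_train": "rail-booking app"}

def classify_intent(utterance: str) -> str:
    # Stand-in for a model call mapping free text to an intent label.
    return "book_train" if "ticket" in utterance else "custom"

def handle(utterance: str) -> str:
    intent = classify_intent(utterance)
    if intent in KNOWN_APPS:
        return f"invoking {KNOWN_APPS[intent]}"
    return "generating a one-off solution"

print(handle("help me book a ticket to Tainan for tomorrow"))
# → invoking rail-booking app
```

The user never sees the branch being taken; whether an "application" was involved at all is an implementation detail hidden behind the expressed intent.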

From Vibe Coding to Claws, Karpathy used two words to trace a trajectory: first humans and AI write code together, then AI manages the entire system itself, with humans only responsible for stating what they want.

The question is no longer “can you write code?” The question is “do you know what problems to solve?”

And that question was never cheap.