Jensen Huang said: “If I were a student today, I would learn AI.”

This statement ignited a wave of discussion in Taiwan in 2025. Some treated it as career gospel—“Quick, sign up for AI courses!” Others scoffed—“Of course the GPU seller tells you to learn AI.” Some felt anxious—“I can’t code, am I doomed?”

But I believe most of the discussion stayed on the surface. The real weight of Jensen Huang’s statement isn’t in the phrase “learn AI” itself, but in the three layers of structural warning behind it.

First Layer: AI Is a Mirror of Thinking

Jensen Huang’s own way of using AI is quite illuminating. He asks AI to “explain it like you’re talking to a 12-year-old,” especially when he’s facing an unfamiliar domain. In other words, he treats AI as a powerful learning partner, one that still needs direction.

The logic behind this approach deserves unpacking. AI doesn’t guess your intentions. You must transform vague ideas into precise instructions. You must know exactly what you want, not just have a general feeling of it.
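To make this concrete, below is a minimal sketch of that habit as a prompt. I use the OpenAI Python SDK purely as an illustration; the model name and the exact wording are my own assumptions, and any chat-style interface would work the same way.

```python
# A minimal sketch of "explain it like you're talking to a 12-year-old"
# as a prompt. The OpenAI Python SDK and the model name are illustrative
# assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Vague version: the model has to guess what you actually want.
vague_prompt = "Tell me about transformers."

# Precise version: audience, scope, and length are all explicit.
precise_prompt = (
    "Explain the transformer architecture like you're talking to a "
    "12-year-old. Focus only on why attention lets the model relate "
    "words to each other. Use one everyday analogy. Under 150 words."
)

for prompt in (vague_prompt, precise_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

The library is beside the point. What matters is that the second prompt forces you to decide the audience, the scope, and the length before the model ever runs.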

This is why I say AI is a mirror of thinking.

When you collaborate with AI, what it exposes isn’t the boundaries of its capability but the flaws in your thinking. If your questions are vague, AI’s answers will be vague. If your logic has holes, AI’s output will have holes. And if you don’t know what problem you’re trying to solve, AI can’t help you solve it.

I’ve experienced this deeply in my own work. Whenever I’m dissatisfied with AI’s output and trace the problem back, it almost always lies in how I asked: not that the AI isn’t smart enough, but that I hadn’t thought the problem through.

So the first layer of “learning AI” isn’t learning a technology. It’s learning how to organize your thinking clearly. That is a fundamental skill worth learning whether or not AI exists; AI just makes the need more urgent and harder to escape.

Second Layer: AI Collaboration Is the Entry Ticket

Jensen Huang’s second warning is more pragmatic. No matter what profession you study, you should ask one question: “Can AI help me do this better?”

What makes this question so potent is that the answer is almost always “yes.”

Programming? AI can help you write drafts, find bugs, and refactor. Design? AI can help you generate sketches, explore color schemes, and build mockups. Research? AI can help you with literature reviews, organize data, and discover patterns. Marketing? AI can help you write copy, analyze audiences, and optimize campaigns.

Note the wording: AI “helps you”; it doesn’t “replace you.” But the magnitude of that help is already enough to change the rules of the game.

A designer who uses AI might have three to five times the output speed of one who doesn’t. A researcher who uses AI might have ten times the efficiency in literature reviews compared to traditional methods.

What does this mean? If your competitors use AI and you don’t, the gap between you isn’t 10% or 20%. It could be a factor of three to ten.

This isn’t a “bonus point.” This is an entry ticket. Just like learning to type on computers thirty years ago, learning to search online twenty years ago, and learning to use smartphones ten years ago. You can choose not to learn, but you must be prepared to accept marginalization.

I discussed in “AI Never Sleeps: The Economic Orders Being Restructured” how AI is restructuring the operating logic of the entire economy. In this new logic, people who can’t collaborate with AI are like cyclists on a highway: it’s not that they aren’t working hard; it’s that they’re on a road that doesn’t belong to them.

Third Layer: The Fundamental Turn of Education

The third layer is the deepest and least discussed.

When AI can summarize a book in seconds, answer knowledge questions, and generate research reports—what is the function of schools?

If schools’ value lies primarily in “transmitting knowledge,” then AI already does it better than most teachers. Faster, more comprehensive, more patient, and available 24/7.

But schools’ value shouldn’t lie only in transmitting knowledge. It should lie in cultivating judgment: distinguishing what matters from what doesn’t amid a flood of information. It should lie in cultivating the ability to question: knowing what to ask is more important than knowing the answers. And it should lie in cultivating the ability to dialogue with intelligence: not accepting AI’s answers at face value, but interacting with AI, questioning it, correcting it, and iterating.

I discussed in “Super Learners: The Learning Revolution in the AI Era” that learning isn’t a one-time event, but a continuously operating system. Education in the AI era needs to cultivate not “people who know many things,” but “people who know how to dialogue with intelligence.”

This is a fundamental turn. From teaching “answers” to teaching “questions.” From teaching “knowledge” to teaching “judgment.” From teaching “memory” to teaching “thinking.”

What We Should Really Worry About

On the surface, Jensen Huang’s “learn AI” advice sounds like career-planning talk. Unpacked, it touches a deeper question: in a world where non-human intelligence grows ever more powerful, where exactly does human value lie?

I believe what we should really worry about isn’t AI replacing humans. What AI currently replaces is repetitive, rule-based work that can be reduced to structure. That work was overdue for replacement, just as washing machines replaced washing clothes by hand, and no one counts that as a loss.

What we should really worry about is humans voluntarily surrendering their judgment in exchange for AI’s convenience and efficiency.

I explored AI’s “structural dishonesty” in “On Language, Truth, and Contradiction”—AI doesn’t intentionally lie, but its probabilistic model naturally produces seemingly plausible but potentially false content. If you accept every AI output without judgment, you’re not using a tool—you’re outsourcing your thinking to a statistical model.

Jensen Huang says: learn AI. But the half of the sentence he didn’t say may matter more: learn AI, but don’t hand your brain over to it.

The future belongs to those who can walk alongside non-human intelligence while maintaining their own judgment. Not because they’re smarter than AI—they’re not. But because they know where AI’s boundaries are, know when to trust AI, when to question it, and when to turn it off and think for themselves.

No AI course can teach you this ability. You need to practice it yourself.