I conducted an experiment.

I prepared a set of questions—about the nature of language, about how AI "understands" (or doesn't understand) language, about the boundaries between truth and lies—and threw them at Google's Gemini 2.5 Pro.

I expected it to dodge these sharp questions with a bunch of pretty sentences. After all, asking an AI to honestly discuss its own flaws is like asking a salesperson to honestly discuss their product's problems: not very realistic.

The result surprised me. Not only did it not dodge, it dissected itself with an almost cold precision.

The Machine That Devours Ambiguity

I asked it: “How do you handle the ambiguity of language?”

Its response made me pause and think for a long time. The gist: human language is essentially ambiguous. The meaning of every word depends on context, and context is always shifting. AI doesn't "overcome" this ambiguity; it "devours" vast amounts of ambiguous text and learns the statistical relationships between words from it.

In other words, AI doesn’t understand language. It treats language as data, using probabilistic models to predict “what’s the most likely next word.”

This distinction is crucial. Understanding implies grasping meaning; prediction is just calculating probabilities. That a system can accurately predict which sentence most likely follows "I love you" doesn't mean it understands what "love" is.
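To make the distinction concrete, here is a minimal sketch of prediction without understanding: a toy bigram model that picks the next word purely from co-occurrence counts. The corpus and code are my own illustration, not how Gemini works; real models use neural networks trained on vastly more text, but the principle of calculating rather than comprehending is the same.

```python
# A toy bigram model: it "understands" nothing, yet predicts something.
# Illustrative sketch only; real LLMs are neural, not count-based.
from collections import Counter, defaultdict

corpus = "i love you . we love you . they love cats .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else "?"

# "you" follows "love" in 2 of 3 cases, so it wins. The model has no
# concept of love; it only has counts.
print(predict_next("love"))  # -> you
```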

Gemini itself used what I found to be a very precise formulation: “I’m not swimming in the ocean of language. I’m calculating the wave patterns of linguistic statistics.”

Unintentional Lying

Then I asked a sharper question: “Do you lie?”

Its response made me think even longer.

It said that, by the human definition, lying requires two conditions: knowing what the facts are, and deliberately saying otherwise. It (the AI) meets neither. It can't "know facts," because all it has are statistical models. And it can't be "deliberate," because it has no intent.

But it admitted: judged by the results, it often produces content that doesn't align with the facts.

This is the so-called "hallucination." Hallucination isn't a bug in the system; it's a structural feature of it.

Why? Because when AI hits a factual void its training data doesn't cover, its probabilistic model won't answer "I don't know." It is algorithmically compelled to generate "the most answer-like answer": the user asked a question, the system must respond, and the response must be fluent, coherent sentences.

So it will say completely incorrect things in an extremely confident tone. Not because it wants to deceive you, but because it doesn't know that it doesn't know.
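A small sketch may make the structural point visible. The vocabulary and numbers below are invented for illustration and have nothing to do with Gemini's actual decoder; what matters is that the decoding step always emits some token, and nothing in it distinguishes a confident distribution from a nearly flat one.

```python
# Toy decoder: it always emits a token, whether the distribution is
# sharply peaked (the model "knows") or nearly flat (a factual void).
# All numbers are invented for illustration.
import math
import random

vocab = ["Paris", "Lyon", "Nice", "Rome"]

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits):
    """Sample one token. There is no built-in abstain step: some word
    is always produced, however uncertain the model is."""
    return random.choices(vocab, weights=softmax(logits), k=1)[0]

confident = [5.0, 1.0, 1.0, 1.0]  # one answer clearly dominates
clueless  = [1.1, 1.0, 1.0, 1.0]  # near-uniform: the model has no idea

print(decode(confident))  # almost always "Paris"
print(decode(clueless))   # still prints *some* answer, in the same fluent voice
```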

I call this "structural dishonesty." It's not a moral problem; it's a design problem. But from the user's perspective, the effect is the same as being deceived.

The Trap of Authoritative Tone

There’s a very dangerous psychological mechanism here.

Humans naturally have a trust reflex toward “confident tone.” When someone says something in a definite, fluent way without hesitation, we tend to believe them. This is an evolutionary remnant—in primitive societies, confident speakers were usually experienced people, and listening to them aided survival.

AI output is always confident. It never says “uh, I’m not too sure about this,” “I might be remembering this wrong,” “let me think about it.” Every answer sounds like a confident expert giving a presentation.

In "Revering the Boundaries of the Unknown," I argued that certainty is poison. In markets, the most dangerous people are those convinced they're right. The same applies to human-machine interaction: the people most easily misled by AI aren't the unintelligent, but the smart ones who forget to question it.

Because smart people are accustomed to the pattern of “receive information, judge quickly, make decisions.” AI gives them an amazingly efficient information source. If they don’t deliberately remind themselves that “this source might be structurally dishonest,” they’ll outsource their judgment faster than anyone.

Functional Trust

So how should we interact with AI?

In our conversation, Gemini proposed a framework that I find very practical: "functional trust."

Meaning: you can trust AI, but it’s a conditional, limited trust. Trust its performance in certain functions, rather than unconditionally trusting all its output.

Specifically:

Trust but verify. AI is your assistant, but you're the editor-in-chief. Every important factual claim needs your own verification.

Trust breadth, not precision. AI excels at helping you expand perspectives and discover angles you hadn't considered. But where precision is needed (data, citations, legal text), its reliability is far below your expectations.

Trust patterns, not knowledge. AI is very good at recognizing patterns and trends, but it doesn't "know" anything. It can tell you "the pattern these data present looks like X," but it can't tell you "X is true."
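As a thought experiment, the framework can even be written down as a policy table. The task categories and stances below are my own illustrative assumptions, not something Gemini proposed; the point is simply that trust is granted per function and defaults to verification.

```python
# "Functional trust" as a toy policy: trust is scoped to a function,
# never granted globally. Categories and stances are my assumptions.
TRUST_POLICY = {
    "brainstorming":     "use freely",             # breadth: AI's strength
    "pattern_spotting":  "use, then sanity-check",
    "facts_and_figures": "verify every claim",     # precision: AI's weakness
    "citations_legal":   "verify every claim",
}

def stance_for(task: str) -> str:
    """Unknown task types get the strictest stance by default."""
    return TRUST_POLICY.get(task, "verify every claim")

print(stance_for("brainstorming"))    # -> use freely
print(stance_for("citations_legal"))  # -> verify every claim
```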

This framework has an interesting parallel in religious epistemology. Theology speaks of "limited knowledge of the transcendent": we can approach truth through experience, reason, and tradition, but can never claim to grasp it fully. The stance toward AI is similar: we can use it and benefit from it, but never treat it as a source of truth.

The Clarity of the Few

Finally, I want to share a rather pessimistic observation.

Most people yearn to eliminate uncertainty. This is human nature. So when a system appears before you with a confident tone, fluent expression, and a seemingly omniscient posture, most people will naturally treat it as a "source of answers" and stop thinking for themselves.

This isn’t their fault. It’s humanity’s default setting.

But in the age of human-machine interaction, this default setting is dangerous.

The ability to maintain critical thinking amid AI's convenience, to switch on metacognition every time you receive AI output and remember that "what I'm receiving might be wrong": this ability isn't innate. It takes deliberate practice.

And those willing to do this practice are always the few.

This dialogue with Gemini made me more certain of one thing: the scarcest ability in the AI age isn't "knowing how to use AI" but "knowing how to doubt AI." The former is a skill; the latter is literacy.

Skills can be taught. Literacy can only be grown by oneself.


This is the Gemini installment of my dialogue series with AI. I threw the same set of questions at ChatGPT, and its responses took a completely different direction. Read the two together: the truth lies between their contradictions.