There’s an imagination that goes like this:
If we could analyze all technical patterns, systems could automatically issue buy and sell signals. If we could analyze users’ age and assets, systems could automatically recommend the most suitable insurance products. If we could collect enough behavioral data, algorithms could predict what you’ll do next.
“Systematizing decisions”—this has been the fundamental faith of countless engineering-background startup teams over the past decade.
But this is less logic than religion. They are disciples of scientism.
The Unbridgeable Chasm
Moving from organized data to an actual decision isn’t a continuous spectrum. It’s a staged process with a cliff in the middle.
Let me break down this process. Data exists in three forms: structured, semi-structured, and unstructured. Structured data is numbers in spreadsheets—revenue, user counts, conversion rates. Semi-structured data is information with some format that requires interpretation—customer feedback emails, meeting notes, market reports. Unstructured data is the stuff floating in the air—a glance, an unspoken word, a gut feeling that “this direction doesn’t seem right.”
What platforms and algorithms can handle perfectly extends basically only to structured data. Give it a table, and it can help you sort, filter, find outliers, and plot trend charts. This is useful, but it’s still far from “decision-making.”
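To make concrete what “sort, filter, find outliers” looks like in practice, here is a minimal sketch in Python using only the standard library. The revenue figures and the two-standard-deviation threshold are invented for illustration; nothing here comes from a real dataset.

```python
import statistics

# Hypothetical structured data: monthly revenue figures (invented numbers).
revenue = [120, 135, 128, 410, 142, 150, 138]

mean = statistics.mean(revenue)
stdev = statistics.stdev(revenue)

# Flag outliers: values more than two standard deviations from the mean.
outliers = [x for x in revenue if abs(x - mean) > 2 * stdev]

# Sort and filter: months above the mean, highest first.
above_mean = sorted((x for x in revenue if x > mean), reverse=True)

print("outliers:", outliers)
print("above mean:", above_mean)
```

This is exactly the kind of mechanical work algorithms do well, and exactly where their competence ends: the code can tell you that 410 is anomalous, but not whether the anomaly is a data-entry error, a one-off windfall, or the first sign of a new trend. That interpretation is the leap the essay is about.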
Because a decision isn’t the sum of compiled data. A decision is a subjective cognitive leap.
You’ve read ten market reports, all data pointing toward direction A. But at yesterday’s dinner, you heard an industry veteran casually mention something that made you vaguely feel direction B was right. You can’t quantify the weight of that comment, can’t even explain to your team why you “feel” B is better. But you made the decision in that instant.
What happens in that instant is something algorithms can’t see or simulate.
The Faith Structure of Scientism
I call the excessive belief that algorithms can replace human decision-making “scientism”—not to insult rational thinking, but because its structure bears striking similarity to religious faith.
The core of religious faith is: there’s a transcendent existence (God, fate, karmic law) that can eliminate uncertainty for you. You just need to believe, and you have answers.
The core of scientism is: there’s a transcendent system (big data, AI, algorithms) that can eliminate uncertainty for you. You just need to feed it enough data, and you have optimal solutions.
The shared psychological driver is identical: escaping the pain of decision-making.
Making decisions is painful. That in-the-moment ambiguity, ambivalence, and uncertainty is a weight each person must face alone. Career choices, investment judgments, romantic decisions—these are all unstructured subjective judgments, carrying human warmth, bias, and arbitrariness.
Scientism’s disciples think algorithms can bridge the vast chasm between the semi-structured and the unstructured. But the chasm doesn’t shrink. It’s just that, because they “believe,” they feel the problem has disappeared.
A 2017 Warning, More Worth Hearing in 2026
I first wrote these thoughts in 2017. Back then AI didn’t have ChatGPT’s aura, big data was the trendiest term, and every entrepreneur was saying “data-driven.”
Nine years on, AI’s capabilities have indeed made qualitative leaps. GPT can write articles, Claude can analyze contracts, various AI agents can automatically execute complex tasks. But the core problem hasn’t disappeared—it’s become sharper.
Because as AI increasingly resembles “something that can make decisions,” people more easily push decision-making responsibility onto it.
In running my company, I’ve seen too many situations like this. Teams use AI for market analysis, AI says a certain market opportunity scores highest, so they decide to go in that direction. No one asks: “Who designed AI’s scoring model? Are the scoring weights reasonable? Are there factors the model can’t see?” This isn’t AI’s fault—it faithfully produces results based on the framework you give it. The problem is people abandoning responsibility for thinking whether the framework itself is correct.
This is the same issue as the taste problem I discussed in “Thinking in the Post-Code Era: When Taste Becomes Humanity’s Key Competitive Advantage.” AI can find optimal solutions within coordinate systems you define, but defining the coordinate system itself—what is “good,” what’s worth pursuing, what risks are acceptable—these remain human responsibilities.
The Real Role of Information Platforms
Let me be clear: I’m not anti-technology. Quite the opposite—I use AI every day.
But I’m very clear about AI’s role in my decision-making process: it’s a “decision-support tool,” not a “decision-dependency system.”
This distinction is crucial.
Decision support means: AI helps me organize structured data, preliminarily categorize semi-structured information, and provide options and analysis within frameworks I set. Then I take responsibility for that final “leap.”
Decision dependency means: AI tells me what to do, I comply. If results are bad, it’s AI’s fault.
If a platform claims it can help you make better decisions and judgments (use this feature to buy stocks that will rise more, use that model to select the most suitable employees), then, as I said nine years ago, it is either foolish or fraudulent. My judgment hasn’t changed today.
Not because the technology isn’t good enough. It’s because the essence of decision-making simply isn’t an optimization problem. It’s a process of making choices under incomplete information while carrying value judgments. You can have better information, but you can’t eliminate the weight of “choosing” itself.
Understanding Decision-Making Pain Through Theology
I learned a concept during theological training that I later found extremely helpful for understanding decision-making: finitude.
Christian theology has a core premise: humans are finite beings. Your knowledge is limited, your perspective limited, your understanding limited. This isn’t a flaw—it’s a basic condition of existence. Accepting finitude isn’t giving up on pursuing better, but acknowledging the fact that “you can’t know everything,” then making the most responsible judgment you can on that foundation.
Scientism’s problem lies precisely in its refusal to accept finitude. It assumes that with enough data and good enough models, you can approach “omniscience.” But omniscience is God’s attribute, not humanity’s. Projecting this attribute onto algorithms is essentially a form of idolatry—wrapping the desire for certainty in technological clothing.
I further explored this problem in “Algorithms as Judges”: When we let algorithms judge human worth and allocate human opportunities, what exactly are we trusting?
The pain of decision-making won’t disappear because of better tools. Tools can let you see more, calculate faster, simulate more scenarios. But that final moment of “I’ve decided” remains your solitary affair.
Prerequisites for Using AI Well
Back to practical matters. If you’re an executive using AI to assist decisions, I suggest posting this sentence next to your screen:
AI’s output quality will never exceed the quality of problems you input.
AI won’t tell you you’re asking the wrong question. It will only faithfully answer the question you ask, no matter how ridiculous that question might be. So your responsibility isn’t learning to use AI, but learning to ask the right questions. And asking the right questions requires deep domain understanding, sensitivity to human nature, and the humility to admit you might be wrong.
Thank goodness so many people in the world are willing to believe algorithms can solve everything. Their existence gives those who truly understand the nature of decision-making an irreplicable competitive advantage.
Because in a world where everyone has the same AI tools, the source of differentiation comes down to one thing: how you use it, and whether you dare choose B based on your judgment when AI says A.
There’s no well-tempered clavier. No universal happiness equation. The weight of decisions is the weight of being alive.