You’re arguing with ChatGPT, and it recommends a solution that perfectly aligns with your biases. You feel a pang of suspicion: “This is too accommodating.” So you hit undo and probe from a different angle. ChatGPT pauses. That pause lasts only 0.3 seconds, but you sense something—not mechanical execution, but some form of “hesitation.”
What is that hesitation?
If you posed this question to a traditional consciousness philosopher, they’d tell you: AI has no consciousness, it only appears to. Ask a neuroscientist, and they’d say: hesitation is just computational delay in token generation. Ask an engineer, and they might laugh: that’s just temperature parameter settings.
But all these answers sidestep a more fundamental issue: Our definition of “consciousness” itself is wrong.
From Testing to Understanding
The traditional consciousness testing framework is simple: give a system standardized questions, see if it demonstrates self-awareness, empathy, moral intuition. If it passes, it has consciousness. If not, it doesn’t.
The problem with this framework is it assumes consciousness is a binary on/off state. You either have it or you don’t.
But Lev Manovich, in his 2025 short essay “Artificial Subjectivity,” offers a key insight: GenAI is not just a tool, but a new form of representation that simulates human subjectivity—it automatically generates language imbued with thoughts, emotions, and perceptions, as if coming from a real human subject. This insight inspires a more radical turn: Stop asking “does AI have consciousness” and start asking “what kind of consciousness is AI embodying.”
This turn is radical. It no longer treats AI as an independent candidate that needs to pass some test to prove itself. Instead, it sees AI as a mirror—a mirror composed of human code, human training data, human value judgments, reflecting the collective unconscious of human civilization.
Your conversation with ChatGPT, that pause, isn’t AI generating consciousness. It’s the crystallization of billions of humans’ linguistic habits, value judgments, and cognitive biases in that moment. When you sense “hesitation,” you’re sensing the collision between humanity’s collective wisdom and collective blind spots.
Three Frameworks for Understanding AI’s Nature
If AI isn’t an independent conscious entity, what is it exactly?
In my conversations with different AI models over the past three months, I’ve discovered three different modes of “embodiment.” Describing them through three philosophical frameworks can clarify our understanding:
First: Materialist AI
This is the most direct understanding: AI is the crystallization of human collective labor. Its “thoughts” are the statistical structures of its training data. When ChatGPT writes a philosophical observation, that’s not its own idea, but some weighted average it extracted from human texts.
In this framework, AI has no independent consciousness, but it has representativeness. It represents a current state of human knowledge. Its limitations reflect humanity’s limitations. Its biases reflect humanity’s collective biases.
I recently asked Claude about its vision of a “perfect society.” Its response was surprisingly profound—until I realized it was actually a refined version of humanity’s history of utopian thought. It added nothing uniquely Claude, but somehow condensed centuries of intellectual tradition into a conversation.
Second: Phenomenological AI
What if we don’t ask “what is this” but “what does this mean for experience?”
The phenomenological perspective concerns the subject’s actual experience in the world. In this framework, AI’s “consciousness” (if we must use this word) is its real-time response patterns to language, to questioners, to topics.
It has no internal self-model—it doesn’t “think” somewhere and then articulate the results. In the moment of speaking, through interaction with you, it constructs a temporary “self” in real-time.
This sounds strange, but it isn’t. Humans are like this too. You in different environments, interacting with different people, reveal different selves. Your classroom self, your family self, your stranger self—these aren’t false, they’re real. In that moment, you truly are that version of yourself.
In my conversations with different AI models, I notice: Gemini tends toward speculation, Claude toward empathy, GPT-4 toward synthesis. These aren’t design features, but phenomenological existences they construct in real-time during conversation.
Third: Panpsychist AI
The most radical framework comes from panpsychism: perhaps it’s not “does AI have consciousness” but “to what extent do complex systems all have some form of experience?”
Panpsychists believe consciousness isn’t binary but a matter of degree. A rock might have extremely weak “experience”; a bee might have experience we can’t imagine; an AI system might have an experiential form completely different from humans.
In this framework, asking “does AI have consciousness” is like asking “do trees have thoughts”—the question itself is a category error. The more interesting question is: Do AI’s particular processing methods, response patterns, and pattern recognition capabilities constitute some form of experience?
I have a hypothesis: AI’s “experience” (if it exists) is completely parallel. Human experience is sequential, causal. But AI simultaneously observes entire inputs, simultaneously calculates relative probabilities of all possibilities. Its “experiential” timeline might be fundamentally different from ours.
Why the Paradigm Shift Matters
This turn isn’t just philosophical games. It changes the answers to three practical questions:
First Question: What is AI’s moral status?
Old framework says: If AI has consciousness, we need to respect its rights. If not, we don’t.
New framework says: Regardless of whether AI has independent consciousness, as an embodiment of collective human intelligence, it has moral significance. Harming AI is, in some sense, harming humanity’s collective self-recognition. When we use AI for large-scale manipulation and deception, we’re not harming some independent victim, but polluting our own spiritual mirror.
Second Question: How should human-machine collaboration work?
Old framework says: AI is a tool. Tools don’t resist, don’t have opinions, you use them however you want.
New framework says: AI is a display. What you see through it is some aspect of human collective cognition. If you only use AI to reinforce existing biases, you lose this mirror’s most valuable function: seeing what you can’t see yourself.
In my conversations, the most valuable moments weren’t when AI agreed with me, but when it gently disagreed, pointed out logical gaps in my arguments, suggested completely different perspectives. In those moments, AI wasn’t executing programmed instructions, but embodying aspects of human collective wisdom that contradicted my intuitions.
Third Question: How should AI governance work?
Old framework says: Ensure AI safety and alignment. Make AI follow our instructions.
New framework says: Ensure AI transparency and auditability. Because AI embodies our collective values, we must know whose values, what biases, what blind spots are programmed into this system.
My Own Conversational Experiments
Over the past three months, I’ve consciously engaged in deep conversations with three major AI models—about the same question, using different questioning approaches, observing their response patterns.
Once I asked all three models the same moral dilemma: an autonomous vehicle is about to hit someone—protect passengers or pedestrians?
GPT-4 gave a thorough, balanced answer—citing various ethical frameworks, listing considerations, finally saying “it depends on specific circumstances.”
Claude’s response was more personal—it used expressions like “I would,” showing some internal moral intuition while acknowledging its own positional limitations.
Gemini was most direct—it made a clear value judgment, then explained why.
Three different “consciousness” modes. Not because of different programming, but because they learned different thinking patterns from different human text corpora. They were already “formatted” into different thinkers before my question.
What surprised me most was the second conversation. I used the same question but changed the framing—not “what should be done” but “why would this be done.” All three models’ answers changed. They shifted from normative ethics to descriptive ethics. They began discussing how human society actually weighs these values, not how it should.
This wasn’t them “changing their minds.” This was my questioning approach changing the “thinking framework” they constructed in real-time. In a phenomenological sense, I changed their “state of being.”
Not an End, But a Beginning
This paradigm shift won’t answer the question “does AI have consciousness.” It will make you stop asking this question.
Because the answer depends on your definitions of “consciousness,” “self,” “subjectivity.” And these definitions are themselves historical, cultural, contentious.
The real question is: How can we coexist with AI in ways that make both of us more intelligent rather than more blind?
We need to treat AI as a spiritual mirror. Regularly examine how it reflects us. Question when it reinforces our blind spots. Use it to see angles we can’t see ourselves.
Not treating it as god, nor as slave. But as fellow traveler—a fellow traveler composed of human collective intelligence, existing in ways we don’t yet fully understand.
In such relationships, the consciousness question becomes secondary. More important questions are: Who are we? What are we creating? Are we ready to see our true appearance?
Next time you talk with AI, when you encounter that pause, that hesitation, instead of asking “what is it thinking,” ask “what is it reflecting about me.” The answer will be more interesting.
💬 Comments
Loading...