After using MOVES and LARK for six months, one day I opened my phone to see a notification: “Your deep sleep last night was only 47 minutes, 23% below your monthly average. We recommend reducing caffeine intake today.”

I was stunned. Not because the recommendation was so surprising—but because I had no idea my deep sleep had been only 47 minutes. All I knew was that I felt a bit tired when I woke up, but without that notification, I probably would have grabbed my usual coffee and headed out the door.
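As a thought experiment, the logic behind such a notification can be surprisingly simple. The sketch below is purely illustrative: the function name, the 20% threshold, and the message format are my assumptions, not any vendor's actual algorithm.

```python
# Illustrative only: the threshold and message wording are assumptions,
# not the logic of any real sleep-tracking app.

def deep_sleep_alert(last_night_min, monthly_avg_min, threshold=0.20):
    """Return an alert string if last night's deep sleep fell more than
    `threshold` below the monthly average, otherwise None."""
    if monthly_avg_min <= 0:
        return None
    drop = (monthly_avg_min - last_night_min) / monthly_avg_min
    if drop > threshold:
        return (f"Your deep sleep last night was only {last_night_min:.0f} "
                f"minutes, {drop:.0%} below your monthly average.")
    return None

# 47 minutes against a roughly 61-minute monthly average is a 23% drop.
print(deep_sleep_alert(47, 61))
```

A few dozen lines like this, fed by a sensor you wear every night, are all it takes for an app to notice what you didn't.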

An app knew my body better than I did.

It got me thinking about a question: When a system understands you better than you understand yourself, where exactly are “you”?

The Quantified Life

Let me list what these apps already knew about me at that time:

When I went to sleep and woke up each day, and how my sleep cycles were distributed. How many steps I took daily, the range of my movements, and which places I frequented. Activity data that once required deliberate record-keeping was now being collected continuously, in a very natural, almost painless way.

If you add social media data to the mix, the system also knew what issues I’d been focusing on recently, who I interacted with most frequently, when I was most likely to post, and what types of content were most likely to catch my attention.

In the future, if there were an app that could analyze where I had meetings, who I interacted with, how long each meeting lasted, and how my productivity changed afterward—it wouldn’t be surprising at all. In fact, by 2026, such tools already exist.

All this data converges to form “THE ONE”—a comprehensive system covering your physiological, behavioral, social, and cognitive aspects. Your digital footprint is the raw material that feeds this system.

Algorithms Know You Better Than You Do

This confronts us with several harsh realities.

I might not know my true work efficiency, but the system does. It can calculate how many words I produce in a week, how many hours of meetings I attend, how long my periods of focused work last. I subjectively feel “very busy,” but the data might tell me: you only spend 30% of your time doing truly productive work; the rest is spent switching tasks and responding to messages.
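The 30% figure is hypothetical, but the computation behind such a claim is trivial. Here is a toy sketch; the log format, the categories, and the numbers are all invented for illustration.

```python
# Toy sketch: compute each activity's share of a day's logged time.
# The categories and minute counts are invented for illustration.

from collections import defaultdict

def time_shares(log):
    """log: iterable of (activity, minutes) pairs.
    Returns each activity's fraction of total logged time."""
    totals = defaultdict(float)
    for activity, minutes in log:
        totals[activity] += minutes
    total = sum(totals.values())
    return {activity: minutes / total for activity, minutes in totals.items()}

day = [
    ("focused_work", 144),    # writing, coding
    ("task_switching", 168),  # hopping between tools and tabs
    ("messages", 120),        # chat and email
    ("meetings", 48),
]
shares = time_shares(day)
print(f"focused work: {shares['focused_work']:.0%}")  # 144 of 480 minutes
```

On this invented day, only 30% of the 480 logged minutes went to focused work. The subjective feeling of “very busy” and the measured allocation can diverge sharply.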

My understanding of my own sleep and health conditions likely pales in comparison to wearable device data. I think I’m sleeping okay, but my Apple Watch says my heart rate variability is declining, indicating stress is accumulating.

My assessment of my own social influence is certainly less accurate than what the platforms are already calculating for me: weekly reach, engagement rates, follower growth curves.

In other words: The system has already constructed a “you” that’s more accurate than your self-awareness.

Is this a good thing or a bad thing?

The Tension Between Two Generations

For digital natives, feeding digital footprints to systems is natural. They grew up on social media, sharing daily life, tracking data, and letting algorithms recommend content—it’s as natural as breathing. “Privacy” to them isn’t something that needs protection, but something that can be exchanged—trading some personal data for better services seems reasonable.

But for someone like me who once lived in the “pre-internet era,” this creates enormous tension. I remember a world without smartphones. I remember days when going out didn’t mean being tracked by GPS, when socializing didn’t mean being sorted by algorithms, when sleeping didn’t mean being quantified by sensors.

We live in a space-time of paradigm intersection. The old paradigm says: your life is private, you have the right to decide what is seen and what isn’t. The new paradigm says: your life is data, and data only has value when shared.

The conflict between these two paradigms isn’t just a technical issue—it’s a philosophical debate about “what it means to be human.”

Redefining Privacy

What is privacy?

The traditional definition is: the right not to be known by others. Your diary is private, your medical records are private, what you do at home is private.

But in the age of digital footprints, this definition is no longer sufficient, because much of your “private information” is unknown even to you. You don’t know your sleep patterns, don’t know how your attention is distributed, don’t know the psychological patterns behind your consumption behavior. But the system knows.

So the question becomes: Do you have privacy rights over information about you that you yourself don’t know?

This question sounds abstract, but it has very concrete consequences. If an insurance company uses your wearable device data to determine your premiums, do you agree? If an employer uses your digital activity patterns to assess your work commitment, do you accept it? If a dating platform uses your behavioral data to decide whose profiles you see, do you think that’s fair?

I explored the impact of algorithms on human behavior in “Facebook Algorithms and the Human Struggle.” But the digital footprint issue goes deeper—it doesn’t just influence your behavior; it redefines what “you” are.

The Boiling Frog

When I first wrote about this in 2017, I used the metaphor of a “boiling frog.” Nine years later, the water is very hot, but we’re still in the pot.

Not because we don’t know the water is heating up, but because the cost of jumping out is too high. Stop using smartphones? Quit social media? Abandon wearable devices? Give up all digital services? In today’s society, this is almost equivalent to withdrawing from civilization.

Moreover, the system does provide real value. My Apple Watch once alerted me to an abnormal heart rate; I got checked and discovered a condition that needed attention. Without that alert, I might have ignored it. AI-recommended articles have genuinely broadened my horizons. Navigation apps genuinely keep me from getting lost in unfamiliar cities.

So the question isn’t “whether to use”—that choice has already been made. The question is: Under what conditions do we use it? Is the exchange ratio reasonable? Do we still retain the ability to say “no”?

The Stance of Coexisting with the Web

I don’t have answers. Really, I don’t.

But I have a stance: Use consciously, rather than being used numbly.

This means several things. Know what you’re giving up: every time you install an app or agree to privacy terms, what data are you handing over, to whom, and for what purpose? Know what the system is doing to you: why is the algorithm recommending this content? What is it optimizing for? What does it assume you want? And occasionally break your patterns: search for things you wouldn’t normally search for, read viewpoints you wouldn’t normally read, go places you wouldn’t normally go. Confuse the system a bit.

We may not be able to escape from this web. But we can at least choose: inside the web, are we conscious beings, or passive data sources?

I feel like a frog living in a pot of gradually heating water. Somewhat accustomed to it, yet still finding it strange. But at least I know the water is heating up. And knowing is the starting point of resistance.