“Once, humans created tools. Now, tools are beginning to co-create humanity.”
The Boundaries of Understanding
Unpredictability was once the exclusive domain of nature and the human psyche. And yet, it is increasingly present in the technological world, especially in the case of AI, which learns, adapts, and makes decisions. Paradoxically, the more “intelligent” AI becomes, the less we understand the mechanisms driving it.
Today, many view AI as more than a tool: it is becoming a myth of our time. A myth promising salvation through automation, but one that also carries the specter of lost control.
When AI Becomes Unpredictable
AI is not just ChatGPT, Midjourney, or Siri. It also includes algorithms that decide who gets a loan, who is flagged in a crowd by facial recognition, and who ends up in prison through predictive policing.
With deep learning, we are dealing with so-called “black boxes”: models whose decisions we cannot fully explain. This isn’t a bug; it’s a fundamental lack of transparency.
“If we can’t understand how a system works, how can we be sure it works well?”, a question increasingly posed in AI ethics and Explainable AI (XAI).
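Explainable-AI research tries to answer that question empirically, by probing a model from the outside rather than reading its internals. Here is a minimal sketch of one such technique, permutation importance, using a hypothetical credit-scoring model (`black_box_score` and its weights are invented for illustration; we query it but pretend we cannot look inside):

```python
import random

random.seed(0)  # make the sketch reproducible

# A hypothetical "black box" credit model: we may query it,
# but we treat its internals as hidden.
def black_box_score(income, debt, age):
    return 1 if 0.7 * income - 1.3 * debt + 0.001 * age > 0.0 else 0

def permutation_importance(model, data, n_features):
    """Shuffle one input column at a time and count how often the
    model's decision flips. Inputs whose shuffling flips many
    decisions are the ones the model actually relies on."""
    baseline = [model(*row) for row in data]
    importances = []
    for i in range(n_features):
        column = [row[i] for row in data]
        random.shuffle(column)
        flips = sum(
            1
            for row, new_val, base in zip(data, column, baseline)
            if model(*row[:i], new_val, *row[i + 1:]) != base
        )
        importances.append(flips / len(data))
    return importances

# Synthetic applicants: (income, debt, age).
data = [(random.random(), random.random(), random.randint(18, 80))
        for _ in range(200)]
scores = permutation_importance(black_box_score, data, 3)
# Income and debt should dominate; age barely moves the decision.
```

The probe tells us *which* inputs drive the decisions without telling us *why*, which is precisely the partial visibility XAI offers: a shadow of the model’s reasoning, not the reasoning itself.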
In my earlier post, “The Future of AI in 2025: What’s Ahead?” I discussed AI’s growing impact on medicine, law, and education. In this context, unpredictability is no longer an academic issue; it becomes existential.
Technology as Myth: Heidegger and the Illusion of Control
In his famous essay “The Question Concerning Technology,” Martin Heidegger wrote that technology is not just a collection of tools. It is a mode of revealing the world, one that can come to dominate how we think. In this view, humans are no longer masters of technology but extensions of it. He suggests that technology has become our modern “god from the machine” (deus ex machina): paraphrasing, we expect algorithms to guide us on how to live. Yet technology, in this case AI, has no awareness or intention, only access to data.
“Technology won’t replace humans, but it may reduce them to a set of statistics.”
In my post “Is AI Truly Intelligent?” I asked whether we attribute too much to algorithms—whether we anthropomorphize them because it’s easier. But this ease comes at a cost: we shift responsibility to something incapable of bearing it.
Automation and Alienation
AI’s unpredictability appears on several levels:
- Technical. We don’t know exactly how the model works. Algorithms trained on millions of parameters, such as language models or behavioral prediction systems, often operate in ways even their creators can’t fully explain.
- Decisional. We don’t know why the system made a specific choice. A user might receive a result such as “credit denied” or “applicant rejected”, often without any insight into the reasoning. AI becomes an arbiter whose rulings are final and unchallengeable.
- Social. We don’t know who bears responsibility. Is it the programmer, the data provider, the end user, or the algorithm itself?
This leads to what I call algorithmic alienation: a state in which humans, once agents and subjects, become passive recipients of machine-made decisions.
“In a world full of automation, decision-making becomes a privilege reserved for code.”
This alienation is especially dangerous because it’s often invisible. We’re not forced to obey AI; we comply willingly because it’s faster, simpler, and more convenient.
A Choice That Isn’t a Choice
This phenomenon is particularly evident in recommendation engines. Netflix, YouTube, Amazon, and TikTok suggest content we “should” enjoy. On one hand, this enhances comfort. On the other, it reduces cognitive diversity, narrows our worldview, and in extreme cases leads to isolation: we begin to believe the world is as the algorithm shows it.
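The narrowing effect is easy to see even in a toy model. A minimal sketch, assuming a hypothetical one-dimensional “taste space” and a greedy recommender that always suggests the unwatched item closest to what the user has already consumed (real engines are vastly more complex; only the feedback loop is the point here):

```python
# Twenty items spread evenly across a 1-D "taste" axis from 0 to 1.
items = [i / 19 for i in range(20)]

def recommend(history, catalog):
    # Greedy engagement rule: suggest the unwatched item closest
    # to the average of everything the user has already consumed.
    center = sum(history) / len(history)
    unwatched = [x for x in catalog if x not in history]
    return min(unwatched, key=lambda x: abs(x - center))

history = [0.5]  # the user starts with one neutral pick
for _ in range(8):
    history.append(recommend(history, items))

# The consumed items cluster around the starting taste: the loop
# keeps feeding the user content that resembles past behavior,
# which then becomes past behavior.
spread = max(history) - min(history)
```

After eight rounds the user has seen less than half of the taste axis. Recommend what resembles the past, and the past narrows; that is the filter bubble in nine lines.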
In my earlier article, “How the Invisible Web of Our Decisions Works”, I wrote: “Our choices are becoming more predictable to machines, yet we understand less about where they come from.”
The More You Know, the Less You Choose?
In the AI era, information is filtered, processed, and delivered in ways optimized for engagement and retention. It’s no coincidence that TikTok users spend significantly more time on the platform than those who actively seek content themselves.
So the question becomes: is this still a choice, or already programmed consumption? Are we still users, or merely participants in a simulation of decision-making?
A New Ethics of AI? Perhaps an Ethics of Uncertainty
Perhaps it’s time to acknowledge we don’t know where we’re headed. AI is teaching us something fundamental about ourselves: we are beings who fear uncertainty and yet cannot eliminate it.
“Knowledge gives power, but awareness of ignorance gives wisdom.”
Technology ethics today must embrace this paradox: we are building things we don’t fully understand, and yet we can’t stop. Maybe it’s not engineering but philosophy that should serve as our compass.
AI as a Mirror
Let’s not demonize AI, but let’s not idealize it either. Artificial intelligence isn’t a savior or a monster. It’s a mirror. A mirror that increasingly reflects our desires, fears, biases, dreams of immortality, and obsession with control.
Like any mirror, AI can distort reality, especially when trained on data that embeds our unconscious biases. For example, when facial recognition performs worse on darker skin tones, it’s not that the AI is racist; it’s that the data it was trained on was biased. It only reflects who we are, without the filter of political correctness.
At the same time, AI can serve as a magnifying mirror by revealing patterns we hadn’t noticed: systemic injustice, flawed processes, sources of disinformation. This mirror can be used not just for narcissism, but for reflection. Technology always reveals humanity’s shadow. The only question is whether we dare to look.
In this sense, AI isn’t a question about technology; it’s a question about humanity. About what we choose, whom we empower, what we accelerate, and what we flatten. It’s a question of whether we can responsibly use a tool that learns faster than we do and has no values unless we embed them.
Don’t fear the mirror. Just don’t confuse its reflection with reality.
Related articles:
– Algorithms born of our prejudices
– Will algorithms commit war crimes?
– Machine, when will you learn to make love to me?
– Artificial Intelligence is a new electricity
References:
“Once, humans created tools. Now, tools are beginning to co-create humanity.” This is my original reflection inspired by the classics of the philosophy of technology. The sentence aligns with posthumanist thought and the philosophy of technology, which emphasize that technology not only serves humans but also shapes their identity, thinking, and culture. Similar ideas can be found in the works of Marshall McLuhan (“First we shape our tools, and thereafter they shape us”), Martin Heidegger (technology as a “mode of revealing the world” that influences how we think and act), and Kevin Kelly (What Technology Wants: technology evolves like an organism, and we are its co-creators, a species dependent on tools).
“If we can’t understand how a system works, how can we be sure it works well?” This is a paraphrase of a widely cited question related to the so-called black-box nature of AI. It appears in various forms in discussions on Explainable AI (XAI), a field devoted to making AI models understandable to humans, and on algorithmic ethics, especially in debates around AI systems making decisions without transparent logic.
“In a world full of automation, decision-making becomes a privilege reserved for code.” This is my own reflection inspired by literature on decision-making automation and black-box AI.
“Knowledge gives power, but awareness of ignorance gives wisdom.” This is my paraphrase of two classic ideas: “Knowledge is power” (Francis Bacon) and “I know that I know nothing” (Socrates). It merges the concept of humility with the philosophy of wisdom.
Joy Buolamwini & Timnit Gebru, *Gender Shades*, MIT Media Lab (2018): https://proceedings.mlr.press/v81/buolamwini18a.html
Shoshana Zuboff, *The Age of Surveillance Capitalism*: https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capitalism
Yuval Noah Harari, TED Talk: https://www.ted.com/talks/yuval_noah_harari_what_explains_the_rise_of_humans
Shannon Vallor, *Technology and the Virtues*: https://academic.oup.com/book/25951
TikTok vs YouTube engagement data: https://www.businessofapps.com/data/tik-tok-statistics/#10