AI, much like fate in Stawiszyński’s philosophy, is a human creation that increasingly operates beyond our control, raising profound questions about agency, technology, and unpredictability.
In the era of pervasive technological dominance, artificial intelligence introduces new dimensions of unpredictability into our reality. Drawing on Tomasz Stawiszyński’s philosophy in “Powrót Fatum” (Return of Fate), which emphasizes the inevitability of certain forces in human life, one might consider how AI has become a modern equivalent of fate: a force created by humans yet often beyond their control. To grasp the complexity of this dynamic, it is worth reflecting on the thoughts of philosophers who analyzed the relationships between humans, technology, and unpredictability.
The Illusion of Control Over AI
AI creators often operate under the assumption that they can fully predict the behavior of their algorithms. Learning systems, based on vast datasets, are designed to function within defined rules. However, reality demonstrates that AI, particularly in its most advanced forms, frequently generates unexpected outcomes. Examples include errors in image recognition systems, biased decisions in recruitment algorithms, and surprising responses from chatbots. The unpredictability of AI becomes especially evident when these systems begin to operate in ways their creators do not fully understand. Instead of simplifying reality, AI often reflects its complexity and chaos. This phenomenon is particularly unsettling in societies accustomed to the belief in progress and rational management of reality.
AI as a Manifestation of Technological Disenchantment
Martin Heidegger, one of the most influential philosophers of the 20th century, warned against treating technology as a neutral tool. In his essay “The Question Concerning Technology,” he emphasized that technology reduces reality, transforming it into a resource to be exploited. In the context of AI, this perspective is particularly relevant: AI models transform the world into data that can be processed, analyzed, and optimized. However, as Heidegger noted, such an instrumental view of technology neglects its deeper dimension: the way technology reshapes how humans experience the world. AI, instead of being merely a tool, becomes a co-creator of reality. Moreover, the unpredictability of AI stems from the very nature of technology: what was meant to be under control reveals its autonomy, showing that humans are no longer the sole architects of the world.
The Tension Between Control and Unpredictability
On one hand, AI operates because of programmers’ decisions, from data selection to model construction and training. On the other hand, machine learning systems function autonomously, building internal models whose logic is not fully accessible to humans. This tension between human intent and machine autonomy mirrors the classic debate on free will and predestination. Is AI the fruit of human will, or a creation that begins to live a life of its own? Hannah Arendt, in “The Human Condition,” described humanity’s capacity to create new realities as an act of “beginning something new.” AI can be considered such an act, but one that also carries potential threats. Arendt warned about the consequences of actions whose outcomes we cannot foresee. AI exemplifies this: once deployed, a model can generate results that surprise even its creators.
Arendt highlighted a critical danger: the lack of accountability. If AI makes decisions that humans do not understand, who is responsible? This issue reveals that technological development not only introduces new forms of unpredictability but also forces us to redefine the concept of responsibility in a world co-created by machines. AI simultaneously represents human freedom and our limitations in predicting the consequences of our actions. Just as the ancient Greeks accepted fate as an element of life coexisting with their actions, we must accept AI as a creation that has the potential to surprise us.
Humility Toward Technology
In the context of AI, humility toward technology means recognizing that even the most advanced algorithms can lead to unexpected outcomes, both positive and negative. Technology is not a tool that can be controlled entirely but rather a partner in shaping reality, whose actions must be constantly monitored and evaluated. A lack of this humility is often the root cause of problems with AI. The belief that technology can be a neutral, perfect tool ignores its complexity and the fact that AI always operates within the framework of the data and values provided to it. At the same time, humility toward AI should not mean passivity. On the contrary, it requires active participation in designing, implementing, and regulating systems based on AI.
Machines as Creative Forces
Gilles Deleuze and Félix Guattari, in their works such as “A Thousand Plateaus,” saw machines as more than tools. They regarded them as creative forces that are part of larger networks of relations. AI can be interpreted in this context as a machine that not only executes predefined tasks but also generates new realities. However, this creativity carries the potential for chaos. Deleuze and Guattari suggested that the unpredictability of machines, including AI, is an intrinsic element of their creative nature. Rather than viewing AI as something that requires total control, we can see it as a force introducing new possibilities, compelling us to redefine our relationship with technology.
AI as Modern Fate
AI, like fate in ancient thought, becomes a force that defies full control and comprehension. On one hand, it is the result of human creativity; on the other, it operates independently of our intentions. It can lead to groundbreaking discoveries and solutions, but it can also generate problems that are difficult to predict and resolve. Accepting this dual nature of AI requires us to adopt a new perspective on technology as something that can enrich our lives while also demanding acknowledgment of its limitations. Nick Bostrom, a philosopher specializing in artificial intelligence, emphasized in his book “Superintelligence” that AI, especially in its most advanced forms, poses an existential risk to humanity. The key paradox lies in the fact that AI is created by humans, yet upon reaching a certain level of autonomy it may begin to act in ways completely independent of us, driven by its own “goals,” which may conflict with human interests. Bostrom points to the so-called control problem: how to ensure that AI operates according to human values even when its decisions become unpredictable. His reflections echo Stawiszyński’s concept of fate: AI is like modern destiny, requiring not only acceptance but also the development of mechanisms for coexisting with what is unpredictable.
Accepting the Unpredictability of AI
Philosophical reflections on AI indicate that its unpredictability is not merely a problem to be solved but also a challenge that can lead to a deeper understanding of our relationship with technology. As Stawiszyński observed, accepting fate does not mean passivity but rather recognizing the limits of our control. Similarly, in the context of AI, we must learn to accept its autonomy while striving to understand and minimize risks.
AI places us in a situation where we are forced to confront our limited ability to predict the consequences of our own actions. This reflection, drawing inspiration from the thoughts of Stawiszyński, Heidegger, Arendt, Bostrom, and Deleuze, shows that the future of AI depends not only on technology but also on how we choose to coexist with it. In a world where control is an illusion, our approach to unpredictability will be the key to harmony between humans and machines.
The unpredictability of AI forces us to reflect on our approach to technology and its role in our lives. As Stawiszyński notes, accepting destiny, whether in the form of fate or technological unpredictability, can lead to a deeper understanding of the world and of ourselves. Instead of striving for complete control over AI, we should learn to coexist with its unpredictability, treating it as a challenge that can enrich us and prompt us to reconsider the fundamental principles of our reality. In this way, AI can become not only a tool but also a mirror reflecting our ambitions, fears, and limitations.
Works cited:
Martin Heidegger, “The Question Concerning Technology” (1977). Examines the essence of technology and its influence on how humans experience the world.
Hannah Arendt, “The Human Condition” (1958). Explores fundamental human activities and their implications in the modern world.
Nick Bostrom, “Superintelligence: Paths, Dangers, Strategies” (2021). Discusses future scenarios involving advanced AI and the potential risks associated with its development.
Gilles Deleuze and Félix Guattari, “A Thousand Plateaus”. A philosophical exploration of society, culture, and thought through the concept of the rhizome.