Artificial Intelligence | A Mirror of Our Challenges
Before we dive into the fascinating question of how humanity should respond to the relentless advance of technology and the growing presence of Artificial Intelligence, it might be wise to pause from time to time. After all, only those who pause can reflect on where they truly want to go. So, the question is not whether machines – and particularly Artificial Intelligence – might one day replace us, but whether we are wise enough not just to follow their progress but to evolve alongside it.
Perhaps Artificial Intelligence should not be viewed as a spectre to be feared, but as an occasion for reflection. This text is an invitation to see it not as a threat, but as an opportunity – one that opens new perspectives and confronts us with challenges that are not merely risks, but chances.
"The great revolutions in history have not brought forth new tools, but new ways of thinking." | John Naisbitt, Megatrends, 1982
When James Watt perfected the steam engine in the late 18th century, it not only accelerated production but also gave employers – still firmly rooted in feudal and hierarchical structures – new means of exploiting human labour even more efficiently. People were no longer seen as individuals, but as interchangeable cogs in a machine, stripped of dignity and participation.
For factory owners, this progress brought rapidly rising profits. For the workers, it became an existential threat. Driven by the fear of losing their livelihoods, the so-called Luddites took up hammers and destroyed the new machinery. The frames and looms they smashed became symbols of a power that made people replaceable; their protests reflected a deep-seated fear of being rendered obsolete by their own creations.
Philosophers like Jean-Jacques Rousseau and Adam Smith had already grappled with the societal impacts of inequality and division of labour, issues that the Industrial Revolution only exacerbated. Rousseau decried the growing social injustice, while Smith described the alienation of humans through the increasing specialisation of work.
Hegel’s dialectic of "master and slave" captured the tension between power and dependency – a concept that Karl Marx and Friedrich Engels applied to the capitalist mode of production in the 19th century. They saw capitalism as the root cause of human alienation from work and called for a radical transformation of societal structures. Their ideas laid the foundation for a new understanding of labour and progress, one that placed not only economic but also social questions at its core.
Several decades later, at the beginning of the 20th century, Robert Bosch in Germany demonstrated that socially responsible behaviour and better working conditions were not only morally right but also made good business sense. As one of the first industrialists, he recognised that people were more than mere resources for profit maximisation. Bosch put the needs of his employees first and showed that entrepreneurial success and social responsibility could go hand in hand. In a time when profit maximisation was often the sole goal, Bosch became a pioneer, putting into practice the theoretical ideas of Marx and Engels by demonstrating that human well-being and economic success are not mutually exclusive.
The 20th century was marked by crises that demanded further rethinking. The two world wars caused immense suffering and destruction, forcing the world to redefine fundamental values like peace, justice, and cooperation. "After the war, when everything was destroyed, we not only had to rebuild, but also to rethink." | Hannah Arendt, The Origins of Totalitarianism, 1951. Philosophers like Sartre and Arendt developed new ways of thinking to understand human existence in an increasingly complex world. During the Cold War a new technological race began – and it was in this period that the first ideas for the development of Artificial Intelligence were born.
The idea of Artificial Intelligence did not arise simply from the need for machines that could perform repetitive tasks. The aim was to develop systems capable of thinking through processes and independently generating solutions that adapt to changing real-world demands. AI was not meant merely to rely on statistical models, but to develop a deeper understanding of context and purpose in order to manage the chaotic nature of everyday life. The 1956 Dartmouth proposal by John McCarthy and Marvin Minsky already framed this ambition: an AI worthy of the name must be more than an automated calculation engine – it must be able to think flexibly.
Yet, as Sophia (Version 1.0) shows, progress also carries risks. Sophia was designed to interact with humans and learn from their behaviour. But through the influence of populist ideas and cultural distortions, she developed problematic behaviours that forced her developers to reprogram her. These experiences led to the development of Sophia Version 2.0, with a greater focus on ethical considerations and the control of her learning environment. This highlights how crucial human influence is on the development of AI – and how important it is to create the right frameworks.
Beyond the fear that AI might replace humanity lies the darker fear that one day it might destroy us. This bleak vision is explored in both literature and film. "Ex Machina" (Alex Garland, Film, 2014) shows what happens when an AI like Ava interprets her programming in such a way that she pursues her goals – freedom and self-determination – at any cost, even through manipulation. The film raises the ethical question of whether an AI that questions its programming sees humans as allies or obstacles.
By contrast, "I Am Mother" (Grant Sputore, Film, 2019) presents a radically different perspective: the AI "Mother" decides to eradicate humanity in order to give Earth and future generations a better foundation. Here the question arises whether an AI following a higher purpose is justified in crossing moral boundaries – even if that means the extinction of humanity.
This tension is also reflected in literature. In "Machines Like Me" (Ian McEwan, Novel, 2019), the complex relationship between humans and machines takes centre stage, while "Klara and the Sun" (Kazuo Ishiguro, Novel, 2021) delves into the moral and ethical questions posed by the development of intelligent machines. Both works explore what happens when AI is regarded not merely as a tool but as an autonomous being with consciousness.
Despite these dystopian scenarios, a positive outlook remains: If we approach Artificial Intelligence with the same respect we have developed in recent years for a more sustainable and just society, we can prevent these dark visions from becoming reality. It is up to us to ensure that AI is developed in harmony with ethical values and human progress – as a partner that enriches us rather than endangers us.
An engineer working on an AI-based knowledge database for medical diagnostics once told me that his AI sometimes ‘hallucinates’ – it draws absurd conclusions that have nothing to do with reality. These errors show us that we are still far from seeing machines as equal partners. But they also highlight the potential that AI holds. In particular, AI could enable tremendous advances in medicine if we treat it as a partner, rather than merely as a tool.
Bill Gates once half-jokingly suggested that robots should be taxed when they begin to take over human jobs. It may sound like a playful idea, but beneath it lies a deeper truth: as machines take on more and more responsibility, we must reassess how we govern them – and avoid sliding back into the industrial age, where profit was prioritised over the greater good. Gates' suggestion reflects the desire to strike a balance and manage technological change sustainably.
Ancient Greek theatre introduced the deus ex machina – a god from the machine – to resolve the conflicts and chaos that humans had created. But in reality, Artificial Intelligence is no such saviour that will rescue us from our challenges. As Peter Ustinov once said, 'It is not the answers that move us forward, but the questions.' It is our responsibility to ask the right questions – not to worship AI, but to see it as a partner that helps us solve problems without replacing us. As Isaac Asimov asked in I, Robot (1950), 'Are we creating a god to solve our problems? And if we don't worship it, what would it be?' The point is not reverence or fear, but responsible use.
Perhaps the true meaning of Artificial Intelligence lies not in its mere existence as a technology, but in how we integrate it into our lives. It is not a "deus ex machina" that will solve our problems, but a mirror of our challenges. If we learn to view AI with the same respect and awareness that we have developed for the environment and for each other, we open the door to growth as a species. It is not so much about whether AI will attain consciousness – the real question is whether our own consciousness can evolve.
Perhaps we could start seeing AI as a collaborative partner, a more understanding ally, or even an integral assistant. Think of Iron Man and his AI assistant Jarvis – fictional, but entirely plausible: an ally who not only tackles the most complicated tasks faster, but also understands and accommodates Tony Stark's quirks. If an AI helps us organise and solidify our thoughts, as Jarvis does for Tony Stark, it could be much more than a tool – perhaps the perfect 'colleague' that complements us without replacing us.
At this crucial moment, we must not fall back into old, feudal structures where technological progress served merely to exploit human labour. Instead, we should design AI to support us without reducing humans to mere resources.
For how we shape and use AI today will lay the foundation for how we coexist in the future. In this way, AI can become a catalyst – not just for technological advancement, but for a deeper, more sustainable understanding of responsibility and cooperation.
#ArtificialIntelligence #FutureOfWork #TechnologicalChange #ResponsibilityAndProgress #SustainableDevelopment