The social perils we must manage in pursuit of our technological fantasies

Part of the ORF series 'AI F4: Facts, Fiction, Fears and Fantasies'.

Human imagination is the foundation for everything in the world around us. As the motivational writer William Arthur Ward famously said, “If you can imagine it, you can create it.”

Throughout history, it was people with seemingly audacious and imaginative fantasies who invented so many of the technologies we take for granted today: electricity, airplanes, organ transplants, the internet and mobile phones, to name a few. All of these have had far-reaching implications for society and the global economy. With the convergence of multiple technologies in the Fourth Industrial Revolution, the technological fantasies inventors hold are even more audacious. Flying anywhere in the world in 30 minutes, eternal life extension, 3D printing entire cities, managing agriculture from space, communicating with computers through thought, creating artificial intelligence that equals the cognitive capability of the human brain, and talking to animals in their own languages are just some of the technological fantasies people are pursuing today.

It is these bold technological fantasies that drive innovation and will have cascading, multi-order impacts on society. We have had many general-purpose technologies in our history (most notably fire and electricity); what is different about our historical moment is the speed at which general-purpose technologies are being introduced into our societies, economies and homes. AI builds on top of previous technologies such as the internet, mobile phones, the Internet of Things and synthetic biology. This rapid integration presents tremendous economic and social opportunity, but it also carries the potential for immense social peril.

The Perils: Social Cohesion and Mental Wellbeing at the Whim of Decision Architecture

As we live in an era of proliferating emerging technologies, it is worth appreciating the power of decision architecture. The clearest example is the underappreciated power that social media's decision architecture has had over the fabric of our societies. When social media was created, the bold technological fantasy was 'what if there could be a digital platform where everyone could be in touch with all their friends?'. Since then, it has expanded into a utility for businesses, a precision advertising platform and a loudspeaker for public officials and terrorists alike, as well as, indeed, a way to stay in touch with friends. Over the years the algorithms changed: technologists and software engineers were asked to design algorithmic systems that keep people on the platforms for as long as possible so that they view as many ads as possible. Facebook is one such example, with 97.5 percent of its revenue coming from ads. Corporate decisions focused singularly on ad revenue, without regard for societal impact, have had consequences for mental health across every demographic.
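To make the point about decision architecture concrete, here is a minimal, purely illustrative sketch in Python (all names, scores and weights are hypothetical; this is not any platform's real ranking code). It contrasts a feed ordered solely for expected time-on-platform with one that explicitly trades engagement off against estimated harm to the viewer.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_watch_seconds: float  # proxy for engagement (hypothetical)
    predicted_harm_score: float     # estimated risk to wellbeing, 0 to 1 (hypothetical)

def rank_for_engagement(posts):
    # Order the feed purely by expected time-on-platform.
    return sorted(posts, key=lambda p: p.predicted_watch_seconds, reverse=True)

def rank_with_duty_of_care(posts, harm_penalty=80.0):
    # Same feed, but estimated harm is explicitly priced into the ranking score.
    return sorted(
        posts,
        key=lambda p: p.predicted_watch_seconds - harm_penalty * p.predicted_harm_score,
        reverse=True,
    )

feed = [
    Post("cooking-clip", 40.0, 0.05),
    Post("outrage-bait", 90.0, 0.80),
    Post("friend-update", 25.0, 0.01),
]
print([p.post_id for p in rank_for_engagement(feed)])     # outrage-bait ranks first
print([p.post_id for p in rank_with_duty_of_care(feed)])  # cooking-clip ranks first

The only difference between the two orderings is the objective the engineers were asked to optimize; that single design choice is what is meant here by decision architecture.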

At the individual level, algorithmic decision architecture that exclusively favors corporate incentives, without consideration of societal impact, has adversely affected mental wellbeing. The Wall Street Journal's investigation of TikTok's algorithm found that it could classify viewers' interests and serve them matching content even when they never explicitly searched for it. More concerning, it found that what keeps people watching more videos is not necessarily what they are interested in or what they like, but what they are most “vulnerable” to. Facebook and Instagram have been found to exacerbate body-image issues among teenagers, particularly girls, and Meta is aware of how its platforms contribute to low self-esteem in kids and teens. A study of the social media use of adolescents who died by suicide found themes relating to the harmful effects of social media such as “dependency, triggers, cybervictimization and psychological entrapment.” Award-winning actress Kate Winslet's film “I Am Ruth” is an intimate portrayal of a mother's struggle as she watches her daughter succumb to these pressures of social media. These negative consequences of AI on social media have not been fully managed and continue to cause harm today.

At the societal level, algorithmic decision architecture that exclusively favors corporate incentives, without consideration of downstream effects, has created polarized and fragmented societies. The Center for Humane Technology outlines the impact of artificial intelligence on society in 'The A.I. Dilemma', a presentation by Aza Raskin and Tristan Harris. They draw a distinction between society's first contact with AI, which came through social media, and its second contact, which they date to 2023 and the current wave of generative AI tools. The harms they identify from society's algorithmic interaction with social media are information overload, doomscrolling, addiction, shortened attention spans, the sexualization of kids, polarization, fake news, cult factories, deepfake bots and the breakdown of democracy. While the software developers did not have malicious intent, they opted out of their duty of care to society when they focused singularly on algorithms incentivized to maximize engagement on the platform. Raskin and Harris's assessment of what is socially unfolding in this second contact with AI is a reality collapse, an excess of fake everything that results in a collapse of trust. In his book The Coming Wave, Mustafa Suleyman (co-founder of DeepMind) also flags his concern about the ubiquity of generative AI and its ability to democratize the creation of cyberweapons and exploit code, and to put our very biology at risk. These concerns are unfolding every day; they too have not been fully managed and remain notable threats today.

While it is important to grapple with the social challenges of existing algorithms, new algorithms are emerging, and they will reach ever deeper into our lives and the lives of our children.


The Duty of Care: New Fantasies, New Technologies, New Responsibilities

Our imaginations are powerful, and each day professionals of all backgrounds are being empowered by artificial intelligence to realize what was once fiction. One growing space that combines AI fictions, fears and fantasies is bringing back the dead. As a way of coping with grief after her best friend passed away, Eugenia Kuyda created a conversational AI bot of him based on their text exchanges, so that she could continue to chat with him posthumously. Out of this experience she founded Replika, a company through which anyone can create a personalized AI companion to chat with. Its testimonials feature many happy users who feel they have found a friend and that this digital algorithmic companion has alleviated their loneliness. Replika and other digital-companion AI companies are, in fact, building a valuable technology that addresses a growing social problem. In May 2023 the US Surgeon General released an advisory report calling out the public health crisis of loneliness and social isolation. Two countries in the world have appointed a Minister for Loneliness: the UK and Japan. However, studies show that this epidemic of loneliness and social exclusion also has a strong foothold in Africa and India, and potentially in other parts of the world where studies have not yet been conducted.

Given the adverse social implications AI has already had through social media, as new AI-based chatbots and digital companions are created to alleviate the growing problem of loneliness it will be imperative to consider the first of the "Three Rules of Humane Tech" outlined by the Center for Humane Technology: “When you invent a new technology, you uncover a new class of responsibilities.” This rule is relevant not only to those who invent a new technology but also to all those who use it and iterate on it. Within the European Union's General Data Protection Regulation (GDPR), which governs how personal data is managed, there is a “Right to be Forgotten”, a provision that did not need to exist until computers could remember us in perpetuity. Will new laws be needed to force companies to maintain the cloud infrastructure of digital companions across an individual's lifetime? Will rights be needed for these algorithmic companions, so that those who rely on them do not have to grieve or feel lonely without them? What if people want to marry their AI companion? If AI companions become an important form of social infrastructure and part of human intimacy, new laws will be needed to protect these customized algorithms and access to them. The European Union's new Digital Services Act (DSA) touches on areas where such questions apply. In a global first, the DSA aims to combat illegal content and protect users' fundamental rights, including making platforms responsible for protecting users' mental health. Having an emotional and intimate relationship with a curated algorithm (particularly one that emulates a deceased loved one) will be an important consideration in this space. The White House Executive Order on AI Safety uses similar language when discussing the management of AI risks and advocates promoting “responsible” innovation. Responsible innovation will require a continuous back-and-forth among legislators, society and ethicists to determine which new responsibilities these new technologies create. The Center for Humane Technology's three rules of humane tech, which will only become more relevant as new and invasive AI use cases emerge, are:

  • Rule 1: When we invent a new technology, we uncover a new class of responsibilities.
  • Rule 2: If that new technology confers power, it will start a race.
  • Rule 3: If we don’t coordinate, the race will end in tragedy.

In due course there will be new and expanded government policies and regulations to mitigate and manage the social implications of existing and new AI technologies. Yet technological advancement continues to outpace the speed at which regulations are legislated, and technologists have an important role to play. The absence of regulation does not mean companies should abdicate their responsibility. In an era when loneliness and isolation are on the rise, and the World Health Organization has made social connection a global health priority through its new Commission on Social Connection, those who design algorithms have an outsized role to play in creating algorithmic systems that do not destroy social cohesion, exacerbate loneliness or push teenagers to take their own lives. Those who leverage algorithms built by others likewise have an important role to play in holding themselves and others accountable for ensuring that algorithmic systems cause no harm.

In the meantime, the wild technological fantasy we should all embrace today is to design algorithmic systems that create incentives for human flourishing and socio-economic prosperity.



✨ If you'd like creative new business strategies for growth and competitiveness, reach out: there is a world of opportunity, and I'd love to help you go after it! 🚀

Whenever you’re ready, here are three ways I can help you:

1. Consulting: reach out to explore the strategic advisory services I offer (tech reports, bespoke research and futures workshops), focusing on business growth opportunities, competitiveness, moat protection and emerging technologies. Check out my LinkedIn profile for the more than one hundred glowing reviews of my advisory work.

2. Board Advisor: if you are looking to diversify your board's thought leadership with complex-systems thinking, emerging technologies, a world view and a team player, contact me.

3. Speaking & Workshops: my global speaking engagements on emerging technologies, the Fourth Industrial Revolution, artificial intelligence and the importance of imagination in an era of AI have been well received, with comments like “I wish I met Lydia sooner” and “I deeply admire her intelligence and humanity, and of her ability to challenge and explore how to make any enterprise more impactful and successful toward the greater good.” Reach out to book me for a talk.

Charlie Black, PhD

Co-Founder @ Xundis Global, LLC | Advisor | Independent Director | Speaker | Marine Veteran | Cultivates Resilient Teams that Succeed in Complexity.

10mo

Nicely done Lydia. I’m reminded of an old film Metropolis (1927) that foreshadowed the social consequences of adopting advanced technology without forethought, constraint and ethics.

Mahgul Nikolo

Zero to Millions Club Mentor | Tech Disruptor | Helping Founders Raise Millions, Fast! 🏳️🌈

10mo

Fantastic insights on the intersection of technology and ethics! Looking forward to reading your thoughts. 🌟

Laszlo Farkas

Data Centre Engineer

10mo

The ethics around technology are a crucial aspect to explore as we envision the future. Looking forward to reading your thoughts! 👍
