Artificial Intelligence (AI) is one of the biggest risks to our civilization
ChatGPT shows that artificial intelligence has gotten incredibly advanced — and that it is something we should all be worried about, according to tech billionaire Elon Musk.
ChatGPT is an advanced form of AI powered by a large language model in OpenAI's GPT-3 family. It is trained on huge bodies of text data to interpret human language and generate humanlike responses.
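To make that concrete, here is a minimal sketch of how a developer might query such a model through OpenAI's Python SDK. The model name and prompt are illustrative assumptions, not details from this article:

```python
# Minimal sketch: asking an OpenAI language model a question.
# Assumes the `openai` Python package (v1.x) is installed and the
# OPENAI_API_KEY environment variable is set; the model name below
# is an illustrative choice, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # a GPT-3-family chat model
    messages=[
        {"role": "user", "content": "Summarize the risks of AI in one sentence."}
    ],
)
print(response.choices[0].message.content)
```

The point is how little scaffolding the interface demands; as the next quote suggests, the underlying models existed for years before a chat interface made them accessible to everyone.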
ChatGPT “has illustrated to people just how advanced AI has become,” Musk said. “The AI has been advanced for a while. It just didn’t have a user interface that was accessible to most people.”
“One of the biggest risks to the future of civilization is AI,” he said. “It’s both positive and negative and has great, great promise, great capability,” Musk added, and “with that comes great danger.”
Whereas cars, airplanes and medicine must abide by regulatory safety standards, AI does not yet have any rules or regulations keeping its development under control, Musk noted.
“I think we need to regulate AI safety, frankly,” he said. “It is, I think, actually a bigger risk to society than cars or planes or medicine.”
Regulation “may slow down AI a little bit, but I think that that might also be a good thing,” Musk added.
The warning carries more weight today, as ChatGPT’s advanced, humanlike writing threatens to upend the job market.
OpenAI, the company behind ChatGPT, was initially created as an open-source nonprofit. It is now closed-source and run for profit.
Risks and Dangers of Artificial Intelligence
AI has been hailed as revolutionary and world-changing, but it’s not without drawbacks.
As AI grows more sophisticated and widespread, the voices warning against the potential dangers of artificial intelligence grow louder.
“The development of full artificial intelligence could spell the end of the human race,” according to Stephen Hawking.
The renowned theoretical physicist isn’t alone with this thought.
“[AI] scares the hell out of me,” Tesla and SpaceX founder Elon Musk once said at the SXSW tech conference. “It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.”
Whether it’s the increasing automation of certain jobs, algorithms biased by gender and race, or autonomous weapons that operate without human oversight (to name just a few concerns), unease abounds on a number of fronts. And we’re still in the very early stages of what AI is really capable of.
RISKS OF ARTIFICIAL INTELLIGENCE
1. Automation-spurred job loss
2. Privacy violations
3. Deepfakes
4. Algorithmic bias caused by bad data
5. Socioeconomic inequality
6. Market volatility
7. Weapons automation
Questions about who’s developing AI and for what purposes make it all the more essential to understand its potential downsides. Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.
IS ARTIFICIAL INTELLIGENCE A THREAT?
The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.
1. JOB LOSSES DUE TO AI AUTOMATION
AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing and healthcare. Eighty-five million jobs are expected to be lost to automation between 2020 and 2025, according to the World Economic Forum, with Black and Latino employees left especially vulnerable.
“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” futurist Martin Ford told Built In. “I don’t think that’s going to continue.”
As AI robots become smarter and more dexterous, the same tasks will require fewer humans. And while the same forecast projects that AI will create 97 million new jobs by 2025, many employees won’t have the skills these technical roles demand and could be left behind if companies don’t upskill their workforces.
“If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a good match for you?” Ford said. “Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents — really strong interpersonal skills or creativity — that you might not have? Because those are the things that, at least so far, computers are not very good at.”
Even professions that require graduate degrees and additional post-college training aren’t immune to AI displacement.
As technology strategist Chris Messina has pointed out, fields like law and accounting are primed for an AI takeover, and some roles may well be decimated. AI is already having a significant impact on medicine, Messina said, with law and accounting next; the former is poised for “a massive shakeup.”
“Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure,” he said in regards to the legal field. “It’s a lot of attorneys reading through a lot of information — hundreds or thousands of pages of data and documents. It’s really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you’re trying to achieve is probably going to replace a lot of corporate attorneys.”
2. SOCIAL MANIPULATION THROUGH AI ALGORITHMS
A 2018 report on the potential abuses of AI lists social manipulation as one of the top dangers of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with a recent example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the 2022 election.
TikTok runs on an AI algorithm that saturates a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising doubts over TikTok’s ability to protect its users from dangerous and misleading media.
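The underlying mechanic is simple enough to sketch. The toy recommender below is a loose stand-in for any engagement-driven feed, not TikTok’s actual system: it ranks unseen videos purely by how well their tags match what the user has already watched, which is exactly how a feed becomes saturated with one kind of content.

```python
from collections import Counter

def recommend(watch_history, catalog, k=3):
    """Toy engagement-driven feed ranking (not TikTok's real algorithm).

    Scores each unseen video by how many of its tags the user has
    already consumed, so heavy viewing of one topic floods the feed
    with more of the same, with no check on whether it is accurate.
    """
    tag_counts = Counter(tag for video in watch_history for tag in video["tags"])
    unseen = [v for v in catalog if v not in watch_history]
    return sorted(unseen,
                  key=lambda v: sum(tag_counts[t] for t in v["tags"]),
                  reverse=True)[:k]

history = [{"id": 1, "tags": ["election", "conspiracy"]},
           {"id": 2, "tags": ["conspiracy"]}]
catalog = history + [{"id": 3, "tags": ["conspiracy", "election"]},
                     {"id": 4, "tags": ["cooking"]}]
print(recommend(history, catalog, k=1))  # surfaces video 3, never video 4
```

Nothing in the scoring function asks whether a video is harmful or true; it only asks whether the user will keep watching, which is the criticism in a nutshell.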
Online media and news have become even murkier in light of deepfakes infiltrating political and social spheres. The technology makes it easy to replace the image of one figure with another in a picture or video. As a result, bad actors have another avenue for sharing misinformation and war propaganda, creating a nightmare scenario where it can be nearly impossible to distinguish between credible and fabricated news.
“No one knows what’s real and what’s not,” said Ford. “So it really leads to a situation where you literally cannot believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence… That’s going to be a huge issue.”
3. SOCIAL SURVEILLANCE WITH AI TECHNOLOGY
Beyond AI’s more existential threats, Ford is focused on the way the technology will adversely affect privacy and security. A prime example is China’s use of facial recognition technology in offices, schools and other venues. Besides tracking a person’s movements, the Chinese government may be able to gather enough data to monitor a person’s activities, relationships and political views.
Another example is U.S. police departments embracing predictive policing algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates, which disproportionately impact Black communities. Police departments then double down on these communities, leading to over-policing and questions over whether self-proclaimed democracies can resist turning AI into an authoritarian weapon.
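That feedback loop is easy to demonstrate. In the toy simulation below, a deliberately simplified model and not any vendor’s actual system, two areas have the same true crime rate, but patrols are allocated in proportion to past arrest counts, so a small early skew in the arrest data compounds round after round:

```python
import random

def simulate(arrest_history, true_rate=0.1, patrols=100, rounds=30, seed=0):
    """Toy model of a predictive-policing feedback loop.

    Both areas have the SAME underlying crime rate. Patrols are
    assigned in proportion to past arrests, and arrests can only
    happen where patrols go, so the initial skew in the historical
    data is never corrected and the absolute gap keeps growing.
    """
    random.seed(seed)
    arrests = list(arrest_history)
    for _ in range(rounds):
        total = sum(arrests)
        for area in range(len(arrests)):
            sent = round(patrols * arrests[area] / total)  # allocate by history
            arrests[area] += sum(random.random() < true_rate for _ in range(sent))
    return arrests

# Area 1 starts with slightly more recorded arrests than area 0.
print(simulate([10, 12]))  # area 1's lead widens despite identical rates
```

The model never sees crime; it only sees its own past output, which is why arrest-driven predictions can entrench over-policing rather than reflect reality.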
“Authoritarian regimes use or are going to use it,” Ford said. “The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?”
4. BIASES DUE TO ARTIFICIAL INTELLIGENCE
Various forms of AI bias are detrimental too. Speaking to the New York Times, Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and humans are inherently biased.
“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities,” Russakovsky said. “We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”
The limited experiences of AI creators may explain why speech-recognition AI often fails to understand certain dialects and accents, or why companies fail to consider the consequences of a chatbot impersonating notorious figures in human history. Developers and businesses should exercise greater care to avoid recreating powerful biases and prejudices that put minority populations at risk.
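Russakovsky’s point that algorithms can “amplify” data bias has a simple worked illustration. The sketch below uses hypothetical numbers, not any real hiring system: if 70 percent of past “good hire” labels belong to one group, a model rewarded only for matching those labels learns to always favor that group, turning a 70/30 skew in the data into a 100/0 skew in the decisions.

```python
def biased_shortcut(group, past_hire_share=None):
    """Stylized bias amplification (hypothetical numbers, not real data).

    A classifier graded only on matching historical labels learns the
    accuracy-maximizing shortcut of always predicting the historically
    favored group, so the data's skew is amplified, not reproduced.
    """
    past_hire_share = past_hire_share or {"A": 0.7, "B": 0.3}
    favored = max(past_hire_share, key=past_hire_share.get)
    return group == favored  # "hire" only applicants from the favored group

print(biased_shortcut("A"), biased_shortcut("B"))  # True False, every time
```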
5. WIDENING SOCIOECONOMIC INEQUALITY AS A RESULT OF AI
If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their DEI initiatives through AI-powered recruiting. The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same discriminatory hiring practices businesses claim to be eliminating.
Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern, revealing the class biases of how AI is applied. Blue-collar workers who perform more manual, repetitive tasks have experienced wage declines as high as 70 percent because of automation. Meanwhile, white-collar workers have remained largely untouched, with some even enjoying higher wages.
Sweeping claims that AI has somehow overcome social boundaries or created more jobs fail to paint a complete picture of its effects. It’s crucial to account for differences based on race, class and other categories. Otherwise, discerning how AI and automation benefit certain individuals and groups at the expense of others becomes more difficult.
6. WEAKENING ETHICS AND GOODWILL BECAUSE OF AI
Along with technologists, journalists and political figures, even religious leaders are sounding the alarm on AI’s potential socioeconomic pitfalls. In a 2019 Vatican meeting titled “The Common Good in the Digital Age,” Pope Francis warned against AI’s ability to “circulate tendentious opinions and false data” and stressed the far-reaching consequences of letting this technology develop without proper oversight or restraint.
“If mankind’s so-called technological progress were to become an enemy of the common good,” he added, “this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest.”
The rapid rise of the conversational AI tool ChatGPT gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and creativity. And even in its attempts to make the tool less toxic, OpenAI reportedly relied on underpaid Kenyan laborers to label the disturbing content used to train its filters.
Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence, we’re going to keep pushing the envelope with it if there’s money to be made.
“The mentality is, ‘If we can do it, we should try it; let’s see what happens. And if we can make money off it, we’ll do a whole bunch of it,’” Messina said. “But that’s not unique to technology. That’s been happening forever.”
CA Harshad Shah, harshadshah1953@yahoo.com