Where is research into artificial intelligence (AI) heading? Is it all beneficial for humanity, or are there risks big enough that we need to make more effort to understand them and develop countermeasures? I believe the latter is true.
The human brain is a biological machine, so it should be feasible to build machines at least as intelligent. As has been argued by Geoff Hinton, with whom Yann LeCun and I shared the Turing Award in 2018 for our work on AI, once we comprehend the main principles underlying human intelligence we will be capable of building superhuman AI surpassing us in most tasks.
Computers already outperform us in specialised areas such as playing Go or modelling protein structures, and we are making progress towards building more general-purpose AI systems: ChatGPT can quickly process a huge training corpus from the internet, a task that would take a human tens of thousands of lifetimes dedicated solely to reading. This is achievable because, while learning, computers can perform extensive parallel computations and share data among themselves at rates billions of times faster than humans, who are limited to exchanging a few bits per second using language. Additionally, computer programs are, unlike humans, immortal, capable of being copied or self-replicating like computer viruses.
When can we expect such superhuman AIs? Until recently, I placed my 50% confidence interval between a few decades and a century. Since GPT-4, I have revised that estimate down to between a few years and a couple of decades, a view shared by my Turing Award co-recipients. What if it happens in five years, or even ten? OpenAI, the company behind GPT, is among those who think it could.
Are we prepared for this possibility? Do we comprehend the potential consequences? No matter the potential benefits, disregarding or playing down catastrophic risks would be very unwise.
How might such catastrophes arise? There are always misguided or ill-intentioned people, so it seems highly probable that at least one organisation or person would—intentionally or not—misuse a powerful tool once it became widely available.
We are not there yet, but imagine a scenario where a method for achieving superhuman AI becomes publicly known and the model downloadable and usable with the resources accessible to a mid-sized company, as is currently the case with open-source—but fortunately not superhuman—large language models. What are the chances that at least one such organisation would download the model and instruct it, possibly using a natural language interface like ChatGPT, to achieve goals that would violate human rights, undermine democracy or threaten humanity as a whole? Examples include targeted cyber-attacks that could threaten fragile supply chains, using convincing dialogue and AI-generated videos to influence citizens to sway an election, or designing and deploying bioweapons.
Another possibility is that AI acquires a self-preservation tendency. This could happen in many ways: training it to mimic humans (who exhibit self-preservation); instructions from humans that make it seek power, and hence pursue self-preservation as a subsidiary goal; or a “Frankenstein” scenario, where someone intentionally builds an AI system with a survival instinct, to make the AI in their own image.
New entities with a self-preservation goal would be like a new species: to preserve themselves, these AIs would strive to prevent humans from shutting them down, attempt to replicate themselves in multiple locations as a defensive measure and potentially behave in ways that harm humans. Once another species on Earth surpasses us in intelligence and power, we may lose control over our own future.
But could such a rogue AI shape the real world? If, like AutoGPT, it was connected to the internet, it could develop numerous strategies or learn them from us: exploiting cyber-security vulnerabilities, employing human assistants (including organised crime), creating accounts to generate income (for instance in financial markets), and influencing or using extortion against key decision-makers. A superhuman AI, whether of its own volition or following human instructions, could destabilise democracies, disrupt supply chains, invent new weapons or worse.
Even if we knew how to construct AI that was not prone to developing dangerous objectives, and even if we implemented strong regulations to minimise access and enforce safety protocols, there is still a possibility of someone gaining access, ignoring the protocols and instructing the AI with disastrous consequences. Given these risks, and the challenges around regulating AI safely, I believe it is crucial to act swiftly on three fronts.
First, the world’s governments and legislatures must adopt national rules, and co-ordinate international ones, that safeguard the public against all AI-related harms and risks. These should prohibit the development and deployment of AI systems with dangerous capabilities, and require thorough, independently audited evaluation of potential harms, with at least the same level of scrutiny as is applied to the pharmaceutical, aviation and nuclear industries.
Second, they should accelerate AI safety and governance research to better understand options for robust safety protocols and governance, and how best to protect human rights and democracy.
Third, we need to research and develop countermeasures in case dangerous AIs arise, whether they are under human instruction or have developed their own goals to preserve themselves. Such research should be co-ordinated internationally and under an appropriate governance umbrella, to make sure that the countermeasures can be deployed around the world and that this work has no military aims, reducing the risk of an AI arms race. This research, combining national-security and AI expertise, should be done by neutral and autonomous entities across several countries (to avoid capture by a single government, which could use its control of AI technology to keep itself in power or attack other countries). It should not be entrusted solely to national labs or for-profit organisations, whose parochial or commercial objectives might interfere with the mission of defending humanity as a whole.
Considering the immense potential for damage, we should make investments to safeguard our future comparable to, if not exceeding, past investments in the space programme or current investments in nuclear fusion. Much is being spent on improving AI capabilities. It is crucial that we invest at least as much in protecting humanity.
Yoshua Bengio is a professor at the Université de Montréal and founder and scientific director of Mila - Quebec AI Institute.
For a contrary view on AI and existential risk, see this article by Blake Richards, Dhanya Sridhar, Guillaume Lajoie and Blaise Agüera y Arcas.
© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com