ChatGPT is about to revolutionize the economy. We need to decide what that looks like.
New large language models will transform many jobs. Whether they will lead to widespread prosperity or not is up to us.
Whether it’s based on hallucinatory beliefs or not, an artificial-intelligence gold rush has started over the last several months to mine the anticipated business opportunities from generative AI models like ChatGPT. App developers, venture-backed startups, and some of the world’s largest corporations are all scrambling to make sense of the sensational text-generating bot released by OpenAI last November.
You can practically hear the shrieks from corner offices around the world: “What is our ChatGPT play? How do we make money off this?”
But while companies and executives see a clear chance to cash in, the likely impact of the technology on workers and the economy as a whole is far less obvious. Despite their limitations—chief among them their propensity for making stuff up—ChatGPT and other recently released generative AI models hold the promise of automating all sorts of tasks that were previously thought to be solely in the realm of human creativity and reasoning, from writing to creating graphics to summarizing and analyzing data. That has left economists unsure how jobs and overall productivity might be affected.
For all the amazing advances in AI and other digital tools over the last decade, their record in improving prosperity and spurring widespread economic growth is discouraging. Although a few investors and entrepreneurs have become very rich, most people haven’t benefited. Some have even been automated out of their jobs.
Productivity growth, which is how countries become richer and more prosperous, has been dismal since around 2005 in the US and in most advanced economies (the UK is a particular basket case). The fact that the economic pie is not growing much has led to stagnant wages for many people.
What productivity growth there has been in that time is largely confined to a few sectors, such as information services, and in the US to a few cities—think San Jose, San Francisco, Seattle, and Boston.
Will ChatGPT make the already troubling income and wealth inequality in the US and many other countries even worse? Or could it help? Could it in fact provide a much-needed boost to productivity?
ChatGPT, with its human-like writing abilities, and OpenAI’s other recent release, DALL-E 2, which generates images on demand, use large language models trained on huge amounts of data. The same is true of rivals such as Claude from Anthropic and Bard from Google. These so-called foundation models, such as GPT-3.5 from OpenAI, which ChatGPT is based on, or Google’s competing language model LaMDA, which powers Bard, have evolved rapidly in recent years.
They keep getting more powerful: they’re trained on ever more data, and the number of parameters—the variables in the models that get tweaked during training—is rising dramatically. Earlier this month, OpenAI released its newest version, GPT-4. While OpenAI won’t say exactly how much bigger it is, one can guess: GPT-3, with some 175 billion parameters, was about 100 times larger than GPT-2.
But it was the release of ChatGPT late last year that changed everything for many users. It’s incredibly easy to use and compelling in its ability to rapidly create human-like text, including recipes, workout plans, and—perhaps most surprising—computer code. For many non-experts, including a growing number of entrepreneurs and businesspeople, the user-friendly chat model—less abstract and more practical than the impressive but often esoteric advances that have been brewing in academia and a handful of high-tech companies over the last few years—is clear evidence that the AI revolution has real potential.
Venture capitalists and other investors are pouring billions into companies based on generative AI, and the list of apps and services driven by large language models is growing longer every day.
Among the big players, Microsoft has invested a reported $10 billion in OpenAI, the maker of ChatGPT, hoping the technology will bring new life to its long-struggling Bing search engine and fresh capabilities to its Office products. In early March, Salesforce said it would introduce a ChatGPT app in its popular Slack product; at the same time, it announced a $250 million fund to invest in generative AI startups. The list goes on, from Coca-Cola to GM. Everyone has a ChatGPT play.
Meanwhile, Google announced it is going to use its new generative AI tools in Gmail, Docs, and some of its other widely used products.
Still, there are no obvious killer apps yet. And as businesses scramble for ways to use the technology, economists say a rare window has opened for rethinking how to get the most benefits from the new generation of AI.
“We’re talking in such a moment because you can touch this technology. Now you can play with it without needing any coding skills. A lot of people can start imagining how this impacts their workflow, their job prospects,” says Katya Klinova, the head of research on AI, labor, and the economy at the Partnership on AI in San Francisco.
“The question is who is going to benefit? And who will be left behind?” says Klinova, who is working on a report outlining the potential job impacts of generative AI and providing recommendations for using it to increase shared prosperity.
The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth.
Helping the least skilled
The question of ChatGPT’s impact on the workplace isn’t just a theoretical one.
In the most recent analysis, OpenAI’s Tyna Eloundou, Sam Manning, and Pamela Mishkin, with the University of Pennsylvania’s Daniel Rock, found that large language models such as GPT could have some effect on 80% of the US workforce. They further estimated that the AI models, including GPT-4 and other anticipated software tools, would heavily affect 19% of jobs, with at least 50% of the tasks in those jobs “exposed.” In contrast to what we saw in earlier waves of automation, higher-income jobs would be most affected, they suggest. Some of the people whose jobs are most vulnerable: writers, web and digital designers, financial quantitative analysts, and—just in case you were thinking of a career change—blockchain engineers.
“There is no question that [generative AI] is going to be used—it’s not just a novelty,” says David Autor, an MIT labor economist and a leading expert on the impact of technology on jobs. “Law firms are already using it, and that’s just one example. It opens up a range of tasks that can be automated.”
Autor has spent years documenting how advanced digital technologies have destroyed many manufacturing and routine clerical jobs that once paid well. But he says ChatGPT and other examples of generative AI have changed the calculation.
Previously, AI had automated some office work, but it was those rote step-by-step tasks that could be coded for a machine. Now it can perform tasks that we have viewed as creative, such as writing and producing graphics. “It’s pretty apparent to anyone who’s paying attention that generative AI opens the door to computerization of a lot of kinds of tasks that we think of as not easily automated,” he says.
The worry is not so much that ChatGPT will lead to large-scale unemployment—as Autor points out, there are plenty of jobs in the US—but that companies will replace relatively well-paying white-collar jobs with this new form of automation, sending those workers off to lower-paying service employment while the few who are best able to exploit the new technology reap all the benefits.
In this scenario, tech-savvy workers and companies could quickly take up the AI tools, becoming so much more productive that they dominate their workplaces and their sectors. Those with fewer skills and little technical acumen to begin with would be left further behind.
But Autor also sees a more positive possible outcome: generative AI could help a wide swath of people gain the skills to compete with those who have more education and expertise.
One of the first rigorous studies done on the productivity impact of ChatGPT suggests that such an outcome might be possible.
Two MIT economics graduate students, Shakked Noy and Whitney Zhang, ran an experiment involving hundreds of college-educated professionals working in areas like marketing and HR; they asked half to use ChatGPT in their daily tasks and the others not to. ChatGPT raised overall productivity (not too surprisingly), but here’s the really interesting result: the AI tool helped the least skilled and accomplished workers the most, decreasing the performance gap between employees. In other words, the poor writers got much better; the good writers simply got a little faster.
The preliminary findings suggest that ChatGPT and other generative AIs could, in the jargon of economists, “upskill” people who are having trouble finding work. There are lots of experienced workers “lying fallow” after being displaced from office and manufacturing jobs over the last few decades, Autor says. If generative AI can be used as a practical tool to broaden their expertise and provide them with the specialized skills required in areas such as health care or teaching, where there are plenty of jobs, it could revitalize our workforce.
Determining which scenario wins out will require a more deliberate effort to think about how we want to exploit the technology.
“I don’t think we should take it as the technology is loose on the world and we must adapt to it. Because it’s in the process of being created, it can be used and developed in a variety of ways,” says Autor. “It’s hard to overstate the importance of designing what it’s there for.”
Simply put, we are at a juncture where either less-skilled workers will increasingly be able to take on what is now thought of as knowledge work, or the most talented knowledge workers will radically scale up their existing advantages over everyone else. Which outcome we get depends largely on how employers implement tools like ChatGPT. But the more hopeful option is well within our reach.
Beyond human-like
There are some reasons to be pessimistic, however. Last spring, in “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” the Stanford economist Erik Brynjolfsson warned that AI creators were too obsessed with mimicking human intelligence rather than finding ways to use the technology to allow people to do new tasks and extend their capabilities.
The pursuit of human-like capabilities, Brynjolfsson argued, has led to technologies that simply replace people with machines, driving down wages and exacerbating inequality of wealth and income. It is, he wrote, “the single biggest explanation” for the rising concentration of wealth.
A year later, he says ChatGPT, with its human-sounding outputs, “is like the poster child for what I warned about”: it has “turbocharged” the discussion around how the new technologies can be used to give people new abilities rather than simply replacing them.
Despite his worries that AI developers will continue to blindly outdo each other in mimicking human-like capabilities in their creations, Brynjolfsson, the director of the Stanford Digital Economy Lab, is generally a techno-optimist when it comes to artificial intelligence. Two years ago, he predicted a productivity boom from AI and other digital technologies, and these days he’s bullish on the impact of the new AI models.
Much of Brynjolfsson’s optimism comes from the conviction that businesses could greatly benefit from using generative AI such as ChatGPT to expand their offerings and improve the productivity of their workforce. “It’s a great creativity tool. It’s great at helping you to do novel things. It’s not simply doing the same thing cheaper,” says Brynjolfsson. As long as companies and developers can “stay away from the mentality of thinking that humans aren’t needed,” he says, “it’s going to be very important.”
Within a decade, he predicts, generative AI could add trillions of dollars in economic growth in the US. “A majority of our economy is basically knowledge workers and information workers,” he says. “And it’s hard to think of any type of information workers that won’t be at least partly affected.”
When that productivity boost will come—if it does—is an economic guessing game. Maybe we just need to be patient.
In 1987, Robert Solow, the MIT economist who won the Nobel Prize that year for explaining how innovation drives economic growth, famously said, “You can see the computer age everywhere except in the productivity statistics.” It wasn’t until later, in the mid and late 1990s, that the impacts—particularly from advances in semiconductors—began showing up in the productivity data as businesses found ways to take advantage of ever cheaper computational power and related advances in software.
Could the same thing happen with AI? Avi Goldfarb, an economist at the University of Toronto, says it depends on whether we can figure out how to use the latest technology to transform businesses as we did in the earlier computer age.
So far, he says, companies have just been dropping in AI to do tasks a little bit better: “It’ll increase efficiency—it might incrementally increase productivity—but ultimately, the net benefits are going to be small. Because all you’re doing is the same thing a little bit better.” But, he says, “the technology doesn’t just allow us to do what we’ve always done a little bit better or a little bit cheaper. It might allow us to create new processes to create value to customers.”
The verdict on when—or even whether—that will happen with generative AI remains uncertain. “Once we figure out what good writing at scale allows industries to do differently, or—in the context of DALL-E—what graphic design at scale allows us to do differently, that’s when we’re going to experience the big productivity boost,” Goldfarb says. “But if that is next week or next year or 10 years from now, I have no idea.”
Power struggle
When Anton Korinek, an economist at the University of Virginia and a fellow at the Brookings Institution, got access to the new generation of large language models such as ChatGPT, he did what a lot of us did: he began playing around with them to see how they might help his work. He carefully documented their performance in a paper in February, noting how well they handled 25 “use cases,” from brainstorming and editing text (very useful) to coding (pretty good with some help) to doing math (not great).
ChatGPT did get one of the most fundamental principles in economics wrong, says Korinek: “It screwed up really badly.” But the mistake, easily spotted, was quickly forgiven in light of the benefits. “I can tell you that it makes me, as a cognitive worker, more productive,” he says. “Hands down, no question for me that I’m more productive when I use a language model.”
When GPT-4 came out, he tested its performance on the same 25 use cases he had documented in February, and it performed far better. There were fewer instances of making stuff up; it also did much better on the math assignments, says Korinek.
Since ChatGPT and other AI bots automate cognitive work, as opposed to physical tasks that require investments in equipment and infrastructure, a boost to economic productivity could happen far more quickly than in past technological revolutions, says Korinek. “I think we may see a greater boost to productivity by the end of the year—certainly by 2024,” he says.
What’s more, he says, in the longer term, the way the AI models can make researchers like himself more productive has the potential to drive technological progress.
That potential of large language models is already turning up in research in the physical sciences. Berend Smit, who runs a chemical engineering lab at EPFL in Lausanne, Switzerland, is an expert on using machine learning to discover new materials. Last year, after one of his graduate students, Kevin Maik Jablonka, showed some interesting results using GPT-3, Smit asked him to demonstrate that GPT-3 is, in fact, useless for the kinds of sophisticated machine-learning studies his group does to predict the properties of compounds.
“He failed completely,” jokes Smit.
It turns out that after being fine-tuned for a few minutes with a few relevant examples, the model performs as well as advanced machine-learning tools specially developed for chemistry in answering basic questions about things like the solubility of a compound or its reactivity. Simply give it the name of a compound, and it can predict various properties based on the structure.
As in other areas of work, large language models could help expand the expertise and capabilities of non-experts—in this case, chemists with little knowledge of complex machine-learning tools. Because it’s as simple as a literature search, Jablonka says, “it could bring machine learning to the masses of chemists.”
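To make the idea concrete, here is a minimal sketch of how such a fine-tuning setup might look using the OpenAI Python SDK. This is not the EPFL group’s actual code; the toy examples, file name, and model choice are illustrative assumptions, and a real study would use a curated chemistry dataset with far more examples.

```python
# Sketch only: fine-tuning a small OpenAI completions model on a handful of
# compound-property examples, roughly in the spirit of the chemistry work
# described above. Data, file name, and model are assumptions, not the
# researchers' actual setup.
import json
from openai import OpenAI

# Toy training examples: a question about a compound in, a property label out.
examples = [
    {"prompt": "Is benzene soluble in water?", "completion": " No"},
    {"prompt": "Is ethanol soluble in water?", "completion": " Yes"},
    {"prompt": "Is sodium chloride soluble in water?", "completion": " Yes"},
]

# Write the examples as JSONL, one record per line, the format the
# prompt/completion fine-tuning endpoint expects.
with open("solubility.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training file and launch a fine-tuning job on a base model
# that accepts prompt/completion data (babbage-002 is used here as an example).
training_file = client.files.create(
    file=open("solubility.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="babbage-002"
)
print("Started fine-tuning job:", job.id)
```

Once a job like this finishes, the resulting model can be queried with the name of a new compound, which is the kind of simple, literature-search-like workflow Jablonka describes.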
These impressive—and surprising—results are just a tantalizing hint of how powerful the new forms of AI could be across a wide swath of creative work, including scientific discovery, and how shockingly easy they are to use. But this also points to some fundamental questions.
As the potential impact of generative AI on the economy and jobs becomes more imminent, who will define the vision for how these tools should be designed and deployed? Who will control the future of this amazing technology?
Diane Coyle, an economist at Cambridge University in the UK, says one concern is the potential for large language models to be dominated by the same big companies that rule much of the digital world. Google and Meta are offering their own large language models alongside OpenAI, she points out, and the large computational costs required to run the software create a barrier to entry for anyone looking to compete.
The worry is that these companies have similar “advertising-driven business models,” Coyle says. “So obviously you get a certain uniformity of thought, if you don’t have different kinds of people with different kinds of incentives.”
Coyle acknowledges that there are no easy fixes, but she says one possibility is a publicly funded international research organization for generative AI, modeled after CERN, the Geneva-based intergovernmental European nuclear research body where the World Wide Web was created in 1989. It would be equipped with the huge computing power needed to run the models and the scientific expertise to further develop the technology.
Such an effort outside of Big Tech, says Coyle, would “bring some diversity to the incentives that the creators of the models face when they’re producing them.”
While it remains uncertain which public policies would help make sure that large language models best serve the public interest, says Coyle, it’s becoming clear that the choices about how we use the technology can’t be left to a few dominant companies and the market alone.
History provides us with plenty of examples of how important government-funded research can be in developing technologies that bring about widespread prosperity. Long before the invention of the web at CERN, another publicly funded effort in the late 1960s gave rise to the internet, when the US Department of Defense supported ARPANET, which pioneered ways for multiple computers to communicate with each other.
In Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, the MIT economists Daron Acemoglu and Simon Johnson provide a compelling walk through the history of technological progress and its mixed record in creating widespread prosperity. Their point is that it’s critical to deliberately steer technological advances in ways that provide broad benefits and don’t just make the elite richer.
From the decades after World War II until the early 1970s, the US economy was marked by rapid technological changes; wages for most workers rose while income inequality dropped sharply. The reason, Acemoglu and Johnson say, is that technological advances were used to create new tasks and jobs, while social and political pressures helped ensure that workers shared the benefits more equally with their employers than they do now.
In contrast, they write, the more recent rapid adoption of manufacturing robots in “the industrial heartland of the American economy in the Midwest” over the last few decades simply destroyed jobs and led to a “prolonged regional decline.”
The book, which comes out in May, is particularly relevant for understanding what today’s rapid progress in AI could bring and how decisions about the best way to use the breakthroughs will affect us all going forward. In a recent interview, Acemoglu said they were writing the book when GPT-3 was first released. And, he adds half-jokingly, “we foresaw ChatGPT.”
Acemoglu maintains that the creators of AI “are going in the wrong direction.” The entire architecture behind the AI “is in the automation mode,” he says. “But there is nothing inherent about generative AI or AI in general that should push us in this direction. It’s the business models and the vision of the people in OpenAI and Microsoft and the venture capital community.”
If you believe we can steer a technology’s trajectory, then an obvious question is: Who is “we”? And this is where Acemoglu and Johnson are most provocative. They write: “Society and its powerful gatekeepers need to stop being mesmerized by tech billionaires and their agenda … One does not need to be an AI expert to have a say about the direction of progress and the future of our society forged by these technologies.”
The creators of ChatGPT and the businesspeople involved in bringing it to market, notably OpenAI’s CEO, Sam Altman, deserve much credit for offering the new AI sensation to the public. Its potential is vast. But that doesn’t mean we must accept their vision and aspirations for where we want the technology to go and how it should be used.
According to their narrative, the end goal is artificial general intelligence, which, if all goes well, will lead to great economic wealth and abundance. Altman, for one, has promoted the vision at great length recently, providing further justification for his longtime advocacy of a universal basic income (UBI) to feed the non-technocrats among us. For some, it sounds tempting. No work and free money! Sweet!
It’s the assumptions underlying the narrative that are most troubling—namely, that AI is headed on an inevitable job-destroying path and most of us are just along for the (free?) ride. This view barely acknowledges the possibility that generative AI could lead to a creativity and productivity boom for workers far beyond the tech-savvy elites by helping to unlock their talents and brains. There is little discussion of the idea of using the technology to produce widespread prosperity by expanding human capabilities and expertise throughout the working population.
As Acemoglu and Johnson write: “We are heading toward greater inequality not inevitably but because of faulty choices about who has power in society and the direction of technology … In fact, UBI fully buys into the vision of the business and tech elite that they are the enlightened, talented people who should generously finance the rest.”
Acemoglu and Johnson write of various tools for achieving “a more balanced technology portfolio,” from tax reforms and other government policies that might encourage the creation of more worker-friendly AI to reforms that might wean academia off Big Tech’s funding for computer science research and business schools.
But, the economists acknowledge, such reforms are “a tall order,” and a social push to redirect technological change is “not just around the corner.”
The good news is that, in fact, we can decide how we choose to use ChatGPT and other large language models. As countless apps based on the technology are rushed to market, businesses and individual users will have a chance to choose how they want to exploit it; companies can decide to use ChatGPT to give workers more abilities—or to simply cut jobs and trim costs.
Another positive development: there is at least some momentum behind open-source projects in generative AI, which could break Big Tech’s grip on the models. Notably, last year more than a thousand international researchers collaborated on a large language model called Bloom that can create text in languages such as French, Spanish, and Arabic. And if Coyle and others are right, increased public funding for AI research could help change the course of future breakthroughs.
Stanford's Brynjolfsson refuses to say he’s optimistic about how it will play out. Still, his enthusiasm for the technology these days is clear. “We can have one of the best decades ever if we use the technology in the right direction,” he says. “But it’s not inevitable.”