6 questions that will dictate the future of generative AI

Generative AI took the world by storm in 2023. In many ways the buzz around it recalls the early days of the internet: there's a sense of excitement and expectancy, and a feeling that we're making it up as we go. The internet brought a lot of good, but it brought a lot of harm, too. Because generative AI was trained on the internet, it inherited some of those same harms. But we can do better. Generative AI's future, and ours, will be shaped by what we do next. In this edition of What's Next in Tech, we examine a few questions (and predictions) about the future of generative AI.

Coming soon: MIT Technology Review’s annual list of 10 Breakthrough Technologies. Save 25% when you subscribe today and be the first to hear about this year’s exceptional technological advancements in AI, biotech, climate change, computing, and more.

Now that AI is fully in the mainstream, niche concerns have become everyone’s problem. Here are some unresolved questions to bear in mind as we watch the generative-AI revolution unfold.

  1. Question: Will we ever mitigate the bias problem? Bias has become a byword for AI-related harms, for good reason. Real-world data, especially text and images scraped from the internet, is riddled with it, from gender stereotypes to racial discrimination. Models trained on that data encode those biases and then reinforce them wherever they are used. Without new data sets or a new way to train models (both of which could take years of work), the root cause of the bias problem is here to stay. But that hasn't stopped it from being a hot topic of research. Prediction: Bias will continue to be an inherent feature of most generative AI models. But workarounds and rising awareness could help policymakers address the most obvious examples.
  2. Question: What misinformation will generative AI make possible? Three of the most viral images of 2023 were photos of the pope wearing a huge Balenciaga puffer jacket, Donald Trump being wrestled to the ground by cops, and an explosion at the Pentagon. All fake; all seen and shared by millions of people. OpenAI has collaborated on research that highlights many potential misuses of its own tech for fake-news campaigns. In a 2023 report it warned that large language models could be used to produce more persuasive propaganda, harder to detect as such, at massive scales. Experts in the US and the EU are already saying that elections are at risk. Prediction: New forms of misuse will continue to surface as use ramps up. There will be a few standout examples, possibly involving electoral manipulation.
  3. Question: Will doomerism continue to dominate policymaking? Doomerism, the fear that the creation of smart machines could have disastrous, even apocalyptic consequences, has long been an undercurrent in AI. But peak hype, plus a high-profile announcement from AI pioneer Geoffrey Hinton in May that he was now scared of the tech he helped build, brought it to the surface. Few issues in 2023 were as divisive in the AI community: leaders in the field were getting into public spats about it on social media. Some have suggested that (future) AI systems should have safeguards similar to those used for nuclear weapons. Such talk gets people's attention. But ultimately, it's hard to understand what's real and what's not, because we don't know the incentives of the people ringing the alarms. Prediction: The fearmongering will die down, but the influence on policymakers' agendas may be felt for some time. Calls to refocus on more immediate harms will continue.

Read the full story for the rest of the questions, including: “How will generative AI change our jobs?”

Artificial intelligence, demystified. Sign up for MIT Technology Review’s weekly AI newsletter, The Algorithm, today.

Get ahead with these related stories:

  1. How existential risk became the biggest meme in AI. Who's afraid of the big bad bots? A lot of people, it seems.
  2. This artist is dominating AI-generated art. And he's not happy about it. AI art generators are built by scraping images from the internet, often without permission or proper attribution to artists.
  3. ChatGPT is about to revolutionize the economy. We need to decide what that looks like. New large language models will transform many jobs. Whether they will lead to widespread prosperity or not is up to us.

Image: Selman Design



Ben Delaney

Interim Leadership & Executive Support for Sustainable Nonprofit Organizations

1y

This article reveals a shockingly keen grasp of the obvious. 🙄

Gustavo Pérez Alvarez

Electrical Engineer - Dr. Specialist in: Electrical Syst. Planning/Elect. protections/Reliability/Opt. maintenance/Renew. Ener. Sourc./Electromagnetic pheno./Nanotech./AI/Machine and Deep learning/Python-R/DS-TensorFlow

1y

A tool that can be used to minimize or eliminate these problems is the generative adversarial network (GAN). GANs pit two neural networks against each other: a generator and a discriminator. The generator creates data samples, while the discriminator is trained to distinguish those generated samples from real examples. The interplay between the two networks drives joint learning, in which the generator constantly improves its ability to deceive the discriminator. And machine learning is about more than solving discriminative tasks: given a large unlabeled data set (unsupervised learning), we may want a model that concisely captures the characteristics of that data. GANs are a clever way to leverage the power of discriminative models to obtain good generative models.
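To make the adversarial setup the comment describes concrete, here is a minimal sketch, assuming PyTorch (the comment names no framework) and a toy 1-D Gaussian standing in for a real data set. All layer sizes, learning rates, and step counts are illustrative choices, not a definitive implementation.

```python
# Minimal GAN sketch (illustrative; assumes PyTorch, toy 1-D data).
import torch
import torch.nn as nn

LATENT_DIM = 8  # size of the generator's noise input (arbitrary choice)

# Generator: maps random noise to a 1-D sample.
generator = nn.Sequential(nn.Linear(LATENT_DIM, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # "Real" data: samples from N(4, 1.25), standing in for a real data set.
    return 4.0 + 1.25 * torch.randn(n, 1)

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1))
              + loss_fn(discriminator(fake), torch.zeros(real.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the real mean (~4).
print(generator(torch.randn(1000, LATENT_DIM)).mean().item())
```

Note the detach() in the discriminator step: it stops gradients from flowing into the generator while the discriminator updates, so each network is trained only against its own objective.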

ANNA MARQUEZ

HOTEL COUTURIER at HOST INTERNATIONAL

1y

👍 👍 👍 🙏

