The AI Act is done. Here’s what will (and won’t) change
Sarah Rogers/MITTR | Getty

It’s official. After three years, the AI Act, the EU’s sweeping new AI law, jumped through its final bureaucratic hoop last week when the European Parliament voted to approve it. But the reality is that the hard work starts now. In this edition of What’s Next in Tech, understand what will and won’t change once the EU begins enforcing the law.

AI has become mainstream, fundamentally transforming the nature of work for people and the organizations that employ them. Join MIT Technology Review’s editors and reporters for a free LinkedIn Live session about how AI is changing the way we work on March 26.

This is what you need to know now that the AI Act has been approved—from the types of AI uses that will be banned, to a new era of AI transparency.

The AI Act will enter into force in May, and people living in the EU will start seeing changes by the end of the year. Regulators will need to get set up in order to enforce the law properly, and companies will have up to three years to comply with the law. Here are four changes you can expect to see.

  1. Some AI uses will get banned later this year: The AI Act places restrictions on AI use cases that pose a high risk to people’s fundamental rights, such as in healthcare, education, and policing. It also bans some uses that are deemed to pose an “unacceptable risk,” and these bans will take effect by the end of the year. They include some pretty out-there and ambiguous use cases, such as AI systems that deploy “subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making,” or that exploit vulnerable people. The AI Act also bans systems that infer sensitive characteristics such as someone’s political opinions or sexual orientation, and the use of real-time facial recognition software in public places. The creation of facial recognition databases by scraping the internet à la Clearview AI will also be outlawed. There are some pretty huge caveats, however.

  2. It will be more obvious when you’re interacting with an AI system: Tech companies will be required to label deepfakes and AI-generated content and to notify people when they are interacting with a chatbot or other AI system. The AI Act will also require companies to develop AI-generated media in a way that makes it possible to detect. This is promising news in the fight against misinformation, and it will give research on watermarking and content provenance a big boost.
  3. Citizens can complain if they have been harmed by an AI: The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement. Thanks to the AI Act, citizens in the EU can submit complaints about AI systems when they suspect they have been harmed by one, and they can receive explanations of why those systems made the decisions they did. It’s an important first step toward giving people more agency in an increasingly automated world.
  4. AI companies will need to be more transparent: Most AI uses will not require compliance with the AI Act. It’s only AI companies developing technologies in “high risk” sectors, such as critical infrastructure or healthcare, that will have new obligations when the Act fully comes into force in three years. These include better data governance, ensuring human oversight, and assessing how these systems will affect people’s rights.
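The four changes above roughly track the AI Act's tiered, risk-based structure: some uses are banned outright, "high risk" uses carry compliance obligations, and lower-risk systems mainly face transparency duties. As a loose illustration only (not legal advice), here is a toy Python sketch of that tiering; the tier labels and example use cases are my own simplifications, not text from the Act:

```python
# Illustrative sketch of the AI Act's risk-based tiers. The real tiers are
# legal categories decided case by case; these mappings are assumptions
# chosen to mirror the examples in the article above.

BANNED = "unacceptable risk: banned (bans kick in by end of year)"
HIGH = "high risk: data governance, human oversight, rights assessments"
TRANSPARENCY = "limited risk: labelling and disclosure duties"
MINIMAL = "minimal risk: no new obligations"

RISK_TIERS = {
    "social scoring": BANNED,
    "scraped facial recognition database": BANNED,
    "medical diagnosis support": HIGH,
    "critical infrastructure control": HIGH,
    "customer service chatbot": TRANSPARENCY,
    "spam filter": MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the (simplified) regulatory tier for a given use case."""
    return RISK_TIERS.get(use_case, "unclassified: needs legal assessment")

print(obligations("customer service chatbot"))
print(obligations("scraped facial recognition database"))
```

The point of the sketch is the structure, not the specifics: most systems fall into the minimal tier and are untouched, while obligations concentrate sharply at the top of the pyramid.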

Read the full story to dive deeper into the implications of each of these changes.

Artificial intelligence, demystified. Sign up for The Algorithm, MIT Technology Review’s weekly AI newsletter, today.

Get ahead with these related stories:

  1. Five things you need to know about the EU’s new AI Act: The EU is poised to effectively become the world’s AI police, creating binding rules on transparency, ethics, and more.
  2. Large language models can do jaw-dropping things. But nobody knows exactly why: Figuring out how LLMs work is one of the biggest scientific puzzles of our time and a crucial step toward controlling more powerful future models.
  3. What’s next for AI regulation in 2024? The coming year is going to see the first sweeping AI laws enter into force, with global efforts to hold tech companies accountable.

