The EU AI Act is finally here. Are you ready for it?

On August 1, 2024, the EU's AI law came into force. The world's first comprehensive law on AI sets the course for many companies.

In this edition, we tell you what you need to know about the new regulation and when you need to act on it.


What happened?

After years of deliberation, the EU AI Act has finally been published in the Official Journal of the European Union – meaning the countdown to enforcement has officially begun. Is your organization ready for it? And if not, what can you do to get there? Let's get into it:

How does it affect your company? 

The AI Act is designed to be extraterritorial, meaning that even if you operate outside of the EU, you may still be subject to it if the output of your AI system is intended to be used within the Union. Moreover, given the lack of comprehensive AI legislation elsewhere, many companies are using this regulation as their north star for ensuring responsible AI.

What’s the timeline for enforcement? 

In just six months – February 2025 – prohibitions on unacceptable-risk systems will go into effect. From there, additional restrictions and obligations will phase in over the next two years, with transition periods in between.

Though the timeline for enforcement is clear enough, what compliance and enforcement will look like in practice is a different story. To gain a deeper understanding of the nitty-gritty of enforcement details and to ensure that your organization is ready for them, join our upcoming webinar.


[Timeline graphic: AI Act enforcement milestones]


Your AI 101: What are...?  

General-purpose AI (GPAI) models can competently perform a wide range of distinct tasks and can be integrated into many different applications.

GPAI model providers must meet specific regulatory obligations under the AI Act, depending on whether or not their models pose systemic risk. Models trained using a cumulative amount of computing power greater than 10^25 floating-point operations ("FLOPs") are presumed to pose systemic risk.
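To make the threshold concrete, here is a minimal sketch of how a provider might estimate training compute and compare it against the Act's 10^25 FLOP presumption. The `training_flops` helper and the ~6·N·D estimate (6 FLOPs per parameter per training token) are our own illustrative assumptions, not anything prescribed by the regulation; the 10^25 figure is the Act's.

```python
# The Act's threshold: training compute above 10^25 FLOPs triggers the
# presumption of systemic risk for a GPAI model.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(num_params: float, num_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6*N*D
    approximation (6 FLOPs per parameter per training token).
    This heuristic is an illustrative assumption, not part of the Act."""
    return 6 * num_params * num_tokens

def presumed_systemic_risk(flops: float) -> bool:
    """True if cumulative training compute exceeds the Act's threshold."""
    return flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 405B-parameter model trained on 15T tokens.
flops = training_flops(405e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(flops)}")
```

Under this estimate, a frontier-scale model of that size lands above 10^25 FLOPs, while a 70B-parameter model trained on the same data would fall just below it – which is why the cumulative-compute accounting matters in practice.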


Follow this human

Brenda Leong is a partner at Luminos.Law, a law firm dedicated to developing policies and practices around AI governance. She has a particular interest in the responsible use of biometric data, and she shares resources for companies looking to create their own AI governance frameworks as well as regulatory updates – including information about the AI Act!
