AI Regulations Are Coming. Could They Put Your Business at Risk?

As AI-based tools and solutions become increasingly mainstream, governments around the world are grappling with how to regulate them. The European Union recently passed the EU AI Act, which it bills as the “first comprehensive regulation on AI by a major regulator” anywhere in the world. In the US, the Biden administration followed suit with an executive order designed to promote a safer, more secure approach to AI. While the efficacy of these regulations remains subject to debate, they represent a necessary first step toward putting some much-needed guardrails around AI development. With AI adoption soaring in nearly every industry, the “wild west” approach can’t last forever.

That said, regulations tend to lag behind innovation, and the pace of AI innovation is borderline unprecedented. With the technology evolving so quickly, it can be difficult for organizations to anticipate what regulations may look like in the future—and uneven regulations in different jurisdictions can artificially hamper innovation based on little more than geographic location. Even worse, these differing regulations can create unforeseen risks—after all, cybercriminals are also using AI, and they won’t be deterred by pesky regulations. If regulators aren’t careful, they could create a situation where attackers have access to tools that security teams are forbidden to use—and that’s a risky proposition.

The Scope and Impact of Current Regulations

Europe has led the way on AI regulation, generally approaching it through the lens of privacy. This isn't particularly surprising: the EU's General Data Protection Regulation (GDPR) was the first major data privacy legislation in the world, creating a blueprint that many others are now following. In fact, while GDPR doesn't specifically mention AI, it does affect how these solutions are used and developed. If sensitive or protected data is used to train an AI model, that can be a serious problem for the developer. AI models can't "un-learn" data, which means that if a model is trained on data in violation of GDPR, the entire model likely needs to be scrapped. Of course, there are other considerations as well: organizations also need to think about how user data is stored and protected, and about how risky the intended applications of their AI solutions are judged to be.

As a result, we are already seeing the mass deployment of AI solutions slow in certain regions as companies work out which AI capabilities might run afoul of regulators. For example, Apple's consumer-facing AI solution, Apple Intelligence, has been available to US iPhone users for some time, but it won't reach users in the EU for another six months as the company works to ensure compliance with the EU Digital Markets Act and other legislation. Even when it does launch, Apple promises EU customers only "many" of Apple Intelligence's core features, a signal that the rest are likely considered too risky for Europe's stricter regulatory landscape. And Apple is hardly alone: right now, businesses simply aren't sure what the rules are or how the EU plans to enforce them.

This is having a noticeable effect. First, it's making organizations less likely to invest in AI-based technology in certain regions, and understandably so: why invest in a solution you might not be permitted to use in the near future? Second, it's hurting businesses themselves. If regulations prevent European companies from accessing the most advanced AI tools, they may not be able to analyze data as quickly or effectively as competitors in other countries. In today's increasingly data-driven world, that could create a real competitive disadvantage for businesses operating in Europe or other highly regulated regions. It's certainly something to keep an eye on.

Changing the Paradigm to Put Security in the Spotlight

Beyond the business impact of AI regulations, there are major cybersecurity implications. In regions where AI is more tightly restricted, regulators risk unintentionally tipping the balance of power in favor of attackers. Organizations in these jurisdictions may have a difficult time determining whether their solutions violate AI regulations, and if they can't be sure, they may lose access to cutting-edge security tools. It's hard to overstate the potential scale of this problem. In today's threat environment, AI solutions aren't just nice to have; they're essential. You can bank on attackers continuing to leverage AI in every way possible, and to counter AI-driven attacks, security teams need AI tools of their own. Without them, organizations risk becoming easy prey, especially for well-funded nation-state attackers.

The point here isn't that regulations are bad. They're not! For a generational technology like AI, one with the potential to upend life as we know it, some level of regulatory oversight is necessary and even beneficial. (Someone has to help rein in our more reckless impulses, after all.) No, the point is that there needs to be a balance. Without regulations, we risk AI development running out of control, trampling privacy rights and ethical considerations in the pursuit of profit. With too many regulations, we risk not just strangling innovation but handing attackers a loaded gun while leaving defenders without a bulletproof vest in sight. Neither option is acceptable, and striking a balance between the two has, thus far, proven to be a challenge.

So, what can we do about it? We can start by changing the paradigm. Regulators need to understand that AI isn't just about increasing efficiency or generating profit. AI plays an increasingly critical role in protecting organizations from today's advanced threats, and excessive restrictions on its use don't just put businesses at a competitive disadvantage; they risk giving attackers easy access to personal information, financial details, user credentials, and a mountain of other sensitive data. At a time when data privacy is being emphasized in nearly every jurisdiction, that's a compelling argument. It's important to start talking not just about the business applications of AI, but about the increasingly critical role it plays in security, too.
