The Urgency for AI Self-Regulation: Learning from Crypto's Mistakes

In the rapidly evolving world of artificial intelligence (AI), there’s one glaring issue the industry seems to be ignoring: regulation. Or, more specifically, the lack of self-regulation. If the AI industry doesn’t wake up soon, it will face the same fate as the crypto industry: heavy-handed regulations that stifle innovation and push companies out of the country.

The parallels between AI and cryptocurrency are striking. The crypto space, full of unfulfilled promises, suffered because of its resistance to self-regulation. Instead of setting its own guidelines and holding itself accountable, the crypto world allowed scammers, fraudsters, and negligent players to thrive unchecked. I could cite hundreds of cases of fraud and scams, from Ponzi schemes to outright theft. In response, regulators have cracked down, hard. The result? Companies are leaving the U.S. in droves, looking for more lenient jurisdictions.

If we don’t want the same fate for AI, the industry needs to change course immediately. We cannot afford to repeat crypto’s mistakes.

The Lesson from Crypto

Crypto’s lack of self-regulation is a cautionary tale. When you leave the doors open for bad actors, they rush in. The crypto world failed to set its own rules, fraud proliferated, and the result was a barrage of lawsuits and government crackdowns.

Today, the AI industry is on the same path. We’re moving too fast, with too little thought about the long-term implications. We’re rushing to ship the next groundbreaking product, and in doing so, we’re risking the entire future of the industry. If we keep blazing ahead without regard for ethical or safety standards, we’re practically inviting government agencies to step in. And we all know how that ends: just look at crypto.

A Case Study in Self-Regulation: Apple’s App Store

The AI industry should take a page out of Apple’s book. From day one, Apple has embraced self-regulation through its App Store. They set clear guidelines for developers. They put teams in place to ensure those guidelines were followed. They built security measures that protected the platform from malicious software and updates. And they developed a fraud team specifically to monitor transactions, ensuring no money laundering or other illegal activity took place.

I spent a lot of time working with various government authorities, both in the U.S. and abroad. I met with the Australian government to discuss how we reviewed apps. I went to Brasilia to tour the government ministries that reviewed books and movies coming into Brazil. The list goes on, but Apple’s answer was always the same: we will self-regulate so you don’t have to. And we showed these authorities, time and time again, that we knew what we were doing.

Apple’s self-regulation model worked. It allowed them to create an ecosystem that consumers and developers trusted. By proactively setting their own rules, they avoided the need for excessive external regulation. It’s time for AI companies to do the same.

We can all agree that the guidelines are overly stringent these days, and that the "Apple Percentage" is too high. But regulators no longer try to get involved in reviewing apps. They trust that Apple self-regulates.

What AI Needs to Do

The AI industry is sitting on a goldmine of potential, but if we continue to operate with reckless abandon, we risk losing it all. To prevent this, the industry must:

  1. Create Self-Regulatory Guidelines: Every AI company should be required to adopt a set of ethical standards, focusing on transparency, safety, and accountability. These guidelines should cover data privacy, bias, and security protocols.
  2. Form Compliance Teams: Like Apple’s app review team, AI companies need internal teams that ensure every product or service adheres to these self-regulatory guidelines. This will show governments and regulators that the industry can be trusted to police itself.
  3. Monitor for Fraud and Misuse: Fraud prevention can’t be an afterthought. Companies must invest in systems and teams to monitor AI products for misuse, manipulation, or unethical outcomes. (A minimal sketch of such a screening hook follows this list.)
  4. Encourage Collaboration Across the Industry: Just as regulators in crypto came down hard because of the industry's fragmented approach, AI needs unity. AI leaders must collaborate on a collective code of conduct for all companies to follow, ensuring consistency and reliability across the sector.
  5. Support Industry Measures to Stop Deepfakes: One of the most pressing ethical concerns in the AI space is the rise of deepfakes, which have the potential to undermine trust in media, politics, and personal identity. The industry must take a firm stance against this misuse of technology, through robust detection tools, identity verification mechanisms, and partnerships with platforms to eliminate harmful content. (See the provenance sketch after this list.) If the AI industry does not address the deepfake issue internally, governments will be forced to step in with strict regulations, potentially limiting innovation across the sector.
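
To make item 3 concrete, here is a minimal sketch of what an internal misuse-screening hook might look like. Everything in it is hypothetical: the rule set, the screen_output() entry point, and the PII pattern are illustrative placeholders, not any company’s actual compliance system.

    # Hypothetical pre-release screening hook -- an illustration of the kind
    # of internal compliance check described in item 3, not a real vendor API.
    import re
    from dataclasses import dataclass

    @dataclass
    class PolicyViolation:
        rule: str    # which self-regulatory guideline was triggered
        detail: str  # human-readable explanation for the review team

    # Example rule: flag strings that look like US Social Security numbers.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def screen_output(text: str) -> list[PolicyViolation]:
        """Check a model output against self-imposed guidelines before release."""
        violations = []
        if SSN_PATTERN.search(text):
            violations.append(PolicyViolation("data-privacy", "possible SSN in output"))
        return violations

    if __name__ == "__main__":
        sample = "Contact John at 123-45-6789 for details."
        for v in screen_output(sample):
            print(f"BLOCK [{v.rule}]: {v.detail}")

A real compliance team would layer many such rules and add human review, but the shape is the same: every output passes through the checks before it ships.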

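For item 5, one widely discussed building block is content provenance: registering a cryptographic fingerprint of authentic media so platforms can verify it later. The sketch below assumes a hypothetical in-memory registry; real systems such as C2PA content credentials are far richer, but the core idea is the same.

    # Hypothetical provenance check -- register a hash of authentic media at
    # publication time, then verify candidate files against the registry.
    import hashlib

    REGISTRY: set[str] = set()  # stand-in for a shared, tamper-evident registry

    def fingerprint(data: bytes) -> str:
        """Return a SHA-256 fingerprint of the raw media bytes."""
        return hashlib.sha256(data).hexdigest()

    def register(data: bytes) -> None:
        """Record authentic media when it is published."""
        REGISTRY.add(fingerprint(data))

    def is_registered(data: bytes) -> bool:
        """True only if the media matches a registered original byte for byte."""
        return fingerprint(data) in REGISTRY

    if __name__ == "__main__":
        original = b"...authentic video bytes..."
        register(original)
        print(is_registered(original))                # True
        print(is_registered(b"...altered video..."))  # False

Exact hashing only catches byte-identical copies; production systems add signed metadata and perceptual matching, which is precisely the kind of shared infrastructure the industry would need to build together.
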
The Consequences of Inaction

If the AI industry continues to neglect self-regulation, the consequences will be severe. Just as the crypto world is now mired in lawsuits, regulatory scrutiny, and capital flight, AI will face similar challenges. Once government regulation steps in, it often overshoots, stifling innovation and chasing the industry out of the country. This doesn’t just hurt companies; it hurts consumers, too.

The AI industry must take control of its own future. If we can proactively build trust with regulators, we’ll avoid the draconian measures that are looming. The window for action is small, but the benefits of self-regulation will last for generations. Let’s not make the same mistake that crypto did.
