Regulation, Ethics, and the Maturation of AI
What will it take for AI to truly go mainstream? (Image by Midjourney)

Emerging technologies are always described as appearing suddenly and developing rapidly. In reality, the most important ones operate very differently. If you're looking, you can see them coming from a mile away. And they take years, sometimes decades, to reach mass adoption and influence.

At the turn of the millennium, it was clear that mobile connectivity and what eventually became known as social media and digital platforms were the future of the internet; we just weren't sure how and when they would develop, or which platforms would win. The two key technologies to come out of the 2010s, modern AI and blockchain, are similar. Blockchain is now in one of its periodic troughs of disappointment, so while there is still more happening in crypto than most realize, I won't discuss it in this post. AI, by contrast, is taking off, both in the popular consciousness and in business interest.

It took ChatGPT mere weeks to reach 100 million users, and a year to be a topic of conversation for every board and VC. Yet those timelines are misleading. Even the overnight sensation of GPT-4 was released seven years after its creator, OpenAI, was founded. The deep learning revolution that sparked serious corporate AI adoption started half a decade before that. And AI has been a recognized field of computer science for well over half a century. This doesn't mean AI development has been predictable. The focal point of AI development in the 2010s was supervised machine learning. The sudden jump in generative AI performance that ChatGPT highlighted was a surprise to most AI researchers. While such step-function technological changes are quite difficult to anticipate, they are blindingly obvious when they occur.

The key to grappling with transformative technologies is to evaluate when and how their technical capabilities will translate into real-world impacts. And that requires a different skillset. The influences of technologies on business and society are driven by their maturity even more than their raw capabilities.

When I say "maturity," I mean how well the technology fits into familiar patterns -- business processes, legal arrangements, social conventions, and group behaviors. When does it move from something scary and unfamiliar, adopted mostly by the most adventurous segments, into a comfortable mass phenomenon? By definition, "normal" people and firms will usually change their behavior only when doing so becomes the conventional thing to do. Individual habits and social mores do change, but it takes time.

There are several dimensions of technological maturation. One of the least understood is the legal, regulatory, and ethical dimension. We've been so acculturated to view regulation and innovation as polar opposites that we rarely appreciate when regulability is a critical innovation accelerant. Yet when adopting a new technology is a leap of faith, nothing is more important than trust. As imperfect as they are, the formal structures of law and regulation, along with the principled adoption of responsible practices, are essential foundations for trust.

With AI, we are witnessing a rare scenario where the conversation about ethics and regulation is occurring just as adoption takes off. In many cases, efforts to regulate new technologies do start too early, often thanks to fears stoked by incumbents. Sometimes, as in the case of social media, it happens too late. For AI, the relevant concerns, such as algorithmic bias, lack of transparency, high-profile catastrophic failures like autonomous vehicle crashes, privacy violations, intellectual property complaints, and manipulation, are quite well-established. The essential next step is for the responses to become equally solid.

Three important developments are happening almost simultaneously, and worldwide. First, enforcement by regulators. Actions such as the Department of Justice suit that forced Facebook to change its entire system for housing-related advertisements, and the Federal Trade Commission's settlement with Rite Aid over poorly implemented facial recognition, are taking place under the authority of current laws. Second, there is an explosion of new legislation, including the European Union's AI Act, New York City's Local Law 144, China's generative AI measures, and a plethora of proposals in the US Congress, imposing new regimes for the distinctive attributes of AI. Though largely lacking the force of law, the Biden Administration's AI Executive Order may be nearly as impactful, through the leverage of federal procurement and agency initiatives. And third, companies are starting to move from ad hoc AI ethics initiatives to more systematic efforts, supported by AI governance frameworks such as the National Institute of Standards and Technology's AI Risk Management Framework.

It's one thing to recognize that your algorithms may be inaccurate, biased, opaque, privacy-invading, and otherwise ethically troublesome; it's something else entirely to implement what I call Accountable AI: well-developed processes that incorporate technical mitigations, legal compliance, operational practices, and assurance mechanisms like AI audits.
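
To make the audit piece concrete, here is a minimal sketch of what one automated check inside an AI audit program might look like. It is purely illustrative, not something described in this article: the 80% threshold borrows the "four-fifths rule" used as a disparate-impact screen in US employment law, and the function names and data are hypothetical.

```python
# Illustrative sketch of a disparate-impact screen an AI audit might run.
# The 0.8 threshold echoes the "four-fifths rule" from US employment law;
# all names and data below are hypothetical, not drawn from the article.

def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Fraction of positive (1) outcomes per demographic group."""
    return {group: sum(labels) / len(labels) for group, labels in outcomes.items()}

def passes_four_fifths_rule(outcomes: dict[str, list[int]], threshold: float = 0.8) -> bool:
    """True if every group's selection rate is at least `threshold` times
    the highest group's rate; False means the model warrants review."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Example: model decisions (1 = approved, 0 = denied) broken out by group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approval rate
}
print(passes_four_fifths_rule(decisions))  # False -> flag for human review
```

A real Accountable AI program would pair checks like this with documentation, legal review, and human sign-off; the point is simply that an "audit" can mean concrete, repeatable tests rather than a one-time ethics statement.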

Much could still go wrong. Poorly drawn regulation or ill-considered judicial decisions could easily become a tax on innovation, especially for the smaller competitors most likely to shake up markets for the better. Or perhaps these initial steps toward maturity will falter too soon. Predicting the exact path of regulatory and ethical development for emerging technologies over a longer time scale is almost as hard as predicting their technical evolution. What's important is knowing what to focus on in the moment.

We are experiencing a crucial regulatory inflection point. Successfully navigating this transition will allow AI's incredible potential for social good and economic benefits to be realized. It's time for AI to grow up.


Justin Ried

Digital Marketing at Qualcomm

11mo

Fantastic article, Kevin. Thank you for sharing.

Yassine Fatihi 🟪

Crafting Audits, Process, Automations that Generate ⏳+💸| FULL REMOTE Only | Founder & Tech Creative | 30+ Companies Guided

11mo

Can't wait to read your thoughts on #AccountableAI! 🚀

Camberley Bates

Where tech meets people and everything in between

11mo

The unknown is a scary place for most people, and fear shuts down our capacity for rational thinking. Add in how fast the technology is moving, the vitriol on social media, and the confusion about how generative AI works, and we have an entire world that is concerned. Personal (including business) responsibility is needed if we are to avoid a government takeover for the "sake of others." We need the voices of the community to speak clearly, and often, about how they are approaching AI with #accountability and #ethics. Having regulation enter into AI at this stage will hinder growth, especially in smaller startups, which are often the harbingers of incredible ideas and innovations. Thank you, Kevin Werbach, for your article. Keep up the posts!

Joshua H.

Program Manager | Ethical Leader | Team Advocate

11mo

Great article.

Dr. Kelly Coulter

Digital Asset Specialist | Policy and Regulation | Fintech | Scholar | Author | Advisor *all views expressed are my own*

11mo

Great post Kevin Werbach
