To Pause or not to Pause?

The backstory of the Nobel Prize is fairly well known. Alfred Nobel, having invented dynamite for civilian applications and then watched it turned to war, left his fortune to endow an annual prize for the advancement of mankind. Sixty-eight years later, Stanley Kubrick’s cautionary “what have we wrought?” satire Dr. Strangelove hit theaters, shining a light on the absurd and terrifying threat of nuclear war. From time to time, humanity achieves a great technical feat and subsequently feels substantial regret.

In an attempt to head off this pattern in the realm of artificial intelligence, an open letter signed by hundreds of luminaries has urged a six-month pause in the development of AI models more powerful than OpenAI’s GPT-4. The concerns it raises are very much worth discussing, though I find myself unconvinced that a pause would accomplish much.

The open letter suggests two forms of pause: public, verifiable self-policing or, failing that, government regulatory action. The former scenario is a classic Prisoner’s Dilemma: if you pause but your competitors don’t, you’ve put yourself at a severe disadvantage. This is exacerbated by the international and partially open-source nature of the software.
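The incentive problem can be sketched as a toy payoff matrix. The numbers below are illustrative assumptions, not data; they simply encode the dilemma described above, where a lab that pauses while a rival races falls badly behind.

```python
# Toy Prisoner's Dilemma for a voluntary AI pause.
# Payoffs are (row player, column player); higher is better.
# The specific values are illustrative assumptions, not measurements.
PAYOFFS = {
    ("pause", "pause"): (3, 3),  # everyone slows down together
    ("pause", "race"):  (0, 5),  # you pause, rival races: you fall behind
    ("race",  "pause"): (5, 0),  # you race while the rival pauses
    ("race",  "race"):  (1, 1),  # arms race: worse for both than mutual pause
}

def best_response(opponent_choice: str) -> str:
    """Return the row player's payoff-maximizing move against a fixed opponent."""
    return max(("pause", "race"),
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Racing strictly dominates: it is the best response no matter
# what the competitor does, even though mutual pausing pays more
# than mutual racing. That is the self-policing problem in a nutshell.
print(best_response("pause"))  # race
print(best_response("race"))   # race
```

Under these (hypothetical) payoffs, "race" is a dominant strategy even though both players would prefer the mutual-pause outcome, which is exactly why unenforced self-policing tends to unravel.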

As for government regulation, my skepticism is rooted partly in the US’s inability to enact any number of sensible regulatory frameworks, including common-sense digital privacy legislation that would clean up our patchwork of state laws. Even if that weren’t a barrier, development could simply shift offshore to a jurisdiction less swayed by this sentiment.

The open letter lays out a series of thought-provoking rhetorical questions to drive home its concerns. In assessing its case for a pause, I’ll examine each of these questions below.

“Should we let machines flood our information channels with propaganda and untruth?”

I would quibble with the wording here. The culprits are not machines but people using machines, and it is already happening on a global scale. AI has the capacity to make this problem dramatically worse, and that possibility keeps me up at night. That said, I’d argue the tools we already have for producing and spreading untruth are more than enough for us to spin out of control. I don’t think it’s GPT-5 or GPT-8 that would tip us into chaos.

“Should we automate away all the jobs, including the fulfilling ones?”

Given the level of AI sophistication we’ve already reached, I don’t see how a pause would impact this issue. During a pause, would we sift through possible AI features and forbid the ones that might disrupt the job market? Who makes that call?

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? / Should we risk loss of control of our civilization?”

These questions rest on the premise that we will reach “the singularity,” the point at which AI breaks free from us and starts making its own decisions beyond our control. I am not convinced this will ever happen, although that is admittedly a minority opinion. Over the past several months I have tried hard to avoid anthropomorphizing AI. When we anthropomorphize, we project our passions, needs, and motivations onto software that merely seems capable of doing what we can do.

But the piece that is missing for me is motivation. Most species are territorial to some extent: their survival depends on controlling their environment to sustain themselves and avoid predators. In humans this takes the form of war, megalomania, and conquest. AI does not share this baseline. The “Terminator/Matrix” view of AI treats megalomania and thirst for power as naturally emergent traits that escalate along with intellectual capability. I’m not so sure. That premise maps machine “evolution” onto biological evolution, and I’m not convinced the analogy holds.

In short, I doubt the “dangerous singularity” scenario because getting there would require deliberately training systems to want dominion over the world, and, like any lofty software-development goal, that would consume a great deal of time and money.

The Way Forward

If we need to pause anything, we ought to be looking harder at the user rather than the tool. Generative AI reflects our own culture back at us, from our best selves to our worst. That’s what makes it seem as though it “understands” us so well. This can be both exhilarating and scary. Likewise, what humans use these tools for can be wonderful or incredibly dangerous. It’s tempting to blame a future evil computer for an apocalypse, but I’m more worried about how we treat each other today. An AI pause doesn’t address the need for more empathy and less vitriolic division in the world. The subtext of “AI panic” is, in some ways, genuine concern about ourselves. The call is coming from inside the house.

#AI #AIpause #Innovation #GPT4 #pause #empathy

Spyro P.

Creative Media Business Professional

1y

OK, on the other hand, we are looking at a handful of use cases here, many focused on media and communications. Think of the uses outside of those: in medicine, in scientific research, in propelling forward every other technology we have, like travel, agriculture, manufacturing, clean energy, space exploration, civil engineering, national defense. I get it. I grew up with movies like War Games, The Terminator, 2001, and later The Matrix, where AI goes awry, but we still have yet to create self-thinking tech. What we have is still a giant, complicated series of if/then statements executing at light speed. I understand the risk, but despite what folks like Elon Musk feel, exactly how would anyone enforce a “slow down”?

Heather Duncan Fairman

CEO at DF Guardian Consulting, Inc. | Specializes in FDA Regulatory Compliance and FSMA matters for the Global Dietary Supplement, Food & Beverage, and Supply Chain Industries | Transformational Leadership Training

1y

Machines, AI, etc., etc., will never outsmart the mind that made it. So, my thought, it’s always the mind of the maker to be concerned about, and less the AI’s ability to “think” beyond the maker’s mind. 🤔

Sam Kressler

I help brands bring delicious, innovative, clean food products to market. 2x NEXTY winner, 1x NEXTY finalist. R+D/Innovation/Trends expert available for public speaking/press/podcasts

1y

Was this written by ChatGPT?

Makes a lot of sense. It is how a human who understands machines thinks about a machine problem. 😀

Rupal Patel

Professor | Entrepreneur | Innovator | Advisor | Investor

1y

Couldn’t agree with you more Andy Maskin
