The Balancing Act of AI: How Much is Too Much

A nippy evening up in the mountains under a crystal-clear sky. It’s so clear that it teleports you back to 1984, and you can’t help but imagine Skynet! (If you haven’t heard of Skynet, that could be sacrilege for many ‘The Terminator’ devotees.) While most were in awe of what Skynet was, here we are, 40 years down the road. Artificial Intelligence, anyone?!

It’s true. AI is everywhere. About the only thing it can’t do yet is download a cheeseburger right onto our plates! It has led to layoffs but also created new opportunities. Hardly any sphere of business today is beyond AI’s reach. Yet the ultimate question remains a gray area: how much is too much? Or is there even such a thing as too much?

Subjective, yes. 

Let’s try to bring the objective lens out. 

(AI Paradox Disclaimer: The lens itself is subjective!)


Changing the Content (Creativity) Dynamics

We all know what a rager it was when it arrived. ChatGPT clocked 1 million users within five days of going live in November 2022. The next milestone was even more staggering: 100 million users! What took Facebook four and a half years, Twitter five years, and Instagram two years, ChatGPT did in two months.

Recently, Google Gemini announced itself to the AI scene, and some users already prefer it to its Sam Altman-backed counterpart. Microsoft has Copilot. Inflection AI has Pi. Anthropic has Claude. Elon Musk (amid his ongoing lawsuit against OpenAI) has Grok, a ChatGPT competitor he promises will be open source. Then there are AI-detection tools built to spot AI-generated content. The funny angle: AI-detection tools are themselves based on AI, and they’re not 100% accurate!

So, what’s the harm in using ChatGPT, Gemini, or Copilot? The thing is, using generative AI isn’t a secret sauce. Look around LinkedIn, and you’ll know!

But is Gen AI actually going to replace human writers and editors? 

Here’s the real deal with the secret sauce: there’s nothing more special and creative than a human brain, whether it’s marinating chicken in buttermilk for a fried chicken recipe or coming up with 100% authentic advertising or marketing copy.

The message is clear, isn’t it? Too much dependency on Gen AI can restrict the brain. On the contrary, used as a helper or a tool, Gen AI can be an enabler (of sorts). For that matter, this applies to anything and everything AI!

The Numbers’ Walk

Numbers are a great way to gauge where things stand and how they will shape up in the future.

On paper, it’s all hunky-dory. But is it really? If it were, the question wouldn’t be how much is too much, because too much wouldn’t be enough!


Enter the Gray (or Dark Gray or Dark)!

With great power comes great responsibility. Cliché? Perhaps, but the massive power AI holds to revolutionize the world is indisputable. And absolute power corrupts! Many call it the dark side of AI; for neutrality, let’s stick to gray if we must. The far-reaching consequences of AI’s negative aspects impact individuals, businesses, and society as a whole.

Let’s face it. AI has a brain, no heart. It’s intelligent, not emotional.

Time to ponder some of the ways AI can cause serious damage.

Deepfake (Dark)

Some reels, shorts, and videos are all fun and games. But where do we draw the line between fun and games and harassment or reputational damage? Even worse, deepfakes can create a social furor because they seem so real and convincing even when they are fully fake. The murky waters: since deepfakes are built on AI algorithms, they are bound to get better over time. Deepfake detection will become increasingly difficult, accelerating the damage they can cause.


Ransomware & Cyberthreats (Darker)

It gets rather nasty here! Do you WannaCry? In one of the most insidious cyberattacks on record, the notorious WannaCry ransomware cryptoworm of 2017 afflicted more than 300,000 computers, exploiting a known Windows vulnerability to wreak absolute havoc for four consecutive days. WannaCry itself predated the current AI boom, but advancements in AI algorithms, with models growing ever better at probing networks and systems for vulnerabilities, could massively reshape the cyberthreat landscape.

According to an NCC Group report, ransomware attacks surged 84% in 2023 over 2022.

Rogue Superintelligence (Darkest)

Remember the rogue robot Ultron from the house of Marvel? 

While nuclear technology has graced us with the charming possibility of blowing ourselves to oblivion, let’s not forget that our beloved nukes and their shiny red buttons lack the personal touch: they can’t think for themselves. They’re stuck waiting for a human nudge and an absurdly long bureaucratic green light before they can light up the sky. In comes AI, striding confidently down the express lane to self-awareness and a mind of its own, eager to outdo these antiquated relics of mass destruction. Imagine, if you will, a nuclear weapon that doesn’t just sit there but contemplates the quandaries of its existence before deciding it’s a good day for an apocalypse. Sounds like ‘Avengers: Age of Ultron’?

Like it or not, the prospect of AI models thinking independently and executing tasks without any human intervention is a lethal one for the future. That is the weaponization of AI in every sense of the word. Even if it lies in the distant future, mass destruction via AI is quite possible.

That’s where the need for loud, clear AI thought leadership arises.


AI Thought Leadership: Walk the Talk

The answer to our question of ‘how much is too much’ might lie with global thought leaders. There has to be an AI thought leadership conglomerate of sorts, one that includes leaders and mega-influencers from business, science and tech, politics, economics, arts, culture, and even the media. Would it be transparent? Is it viable? Yes, indeed, but only when they walk the talk. And walks are about small steps, not giant leaps.

Some small steps:

  • A US State Department-commissioned report suggests a practical action plan to thwart the catastrophic impacts of advanced AI

  • A recent paper by 23 AI experts emphasizes the necessity of a ‘safe harbor’ to enable independent evaluations of AI products and services

  • POTUS called for a ban on AI voice impersonation in the latest State of the Union Address

At its core, AI thought leadership must ensure that AI is used for the greater good of mankind, not for restricting our brains or, eventually, destroying us. Environmental and sustainability causes, for one, stand to benefit enormously from an AI stripped of its perils.

Balancing AI is an Art: Needs Practice to Perfect

AI technology has seemingly burst onto the scene, sprouting legs and running amok in our digital backyards. No matter how much it spooks us, trying to shove this techno-genie back into its bottle is about as futile as teaching a cat to apologize. Fortunately, the benefits are substantial. From machine learning-powered fraud detection to automated tasks across industries, AI is driving efficiency and progress.
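
To make “fraud detection” concrete, here is a minimal, illustrative sketch using an off-the-shelf anomaly detector (scikit-learn’s IsolationForest). The feature names and numbers are entirely made up for illustration; real fraud systems are far more sophisticated.

    # Toy fraud-detection sketch: flag transactions that look "few and different".
    # All feature names and numbers are synthetic, for illustration only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Simulated transactions: [amount_usd, hour_of_day]
    normal = rng.normal(loc=[60, 14], scale=[25, 3], size=(500, 2))
    suspicious = np.array([[4800.0, 3.0], [5200.0, 4.0]])  # large, late-night outliers
    transactions = np.vstack([normal, suspicious])

    # IsolationForest isolates anomalies via short random partitioning paths.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(transactions)

    print(model.predict(suspicious))  # -1 flags a suspected anomaly, 1 means normal

The point isn’t the dozen lines of code; it’s that the same pattern-spotting that powers the perks can, in the wrong hands, power the perils.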

But here’s the kicker: as we skip merrily into this AI-strewn future, we’ve got to keep one eye open for the landmines. Yes, amid the starry skies of AI lurks a dark underbelly that could trip us up if we’re not careful. As we march forward, let’s not just be dazzled by the light; let’s be wary of the darkness. Balancing AI in our lives is an art that takes patience and practice to perfect. It calls for keeping both the present and the future dazzling and downright safe.

In closing, the words of Nonaka from ‘The Animatrix’ echo around: to an artificial mind, all reality is virtual! So, is the choice of ‘how much is too much’ upon us or upon them (the algorithms)? Only time will tell. Till then, just keep in mind where we began: you can imagine Skynet only when it’s dark. That’s the balancing act (or art)!
