Rewards, Risks & the Fundamental Challenge of AI Development
Image generated using Adobe Firefly

Like it or not, the technologies grouped under the “AI” label continue to infiltrate every aspect of our lives. This could be in very tangible ways, as with generative tools like ChatGPT and DALL-E, or, more annoyingly, as the latest buzzword marketers have latched onto in order to sell us something. Given the many new developments since I wrote my previous article on AI, The Promise & Perils of Artificial Intelligence, I set out to take another deep dive to understand AI in practice, with a renewed focus on its ethical implications as well as potential next steps in its evolution.

But First, A Reminder

Before taking our deep dive, let’s remind ourselves of key terms. As IBM defines it, “Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.” It encompasses Machine Learning, where algorithms allow computers to gather data and learn with increasing accuracy, which in turn encompasses Deep Learning and Neural Networks, both of which reduce or eliminate the need for human intervention in the machine’s learning process. Narrow (or Weak) AI focuses on specific tasks such as personal assistance or driving a car, while Strong AI aims at Artificial General Intelligence or Artificial Super Intelligence – intelligence equal to or greater than that of humans. Looking at the historical development of AI-type software, a key improvement comes from software that can learn on its own rather than having every possible output pre-programmed. Chatbots offer a good example. Unlike past versions, where programmers had to anticipate every possible question and program the answers into their software, modern AI-powered chatbots use Large Language Models to “understand” queries and generate novel answers using statistics and algorithms learned from training data sets. To learn more about the details of AI technologies, I recommend IBM’s excellent overview.
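
To make that contrast concrete, here is a deliberately tiny sketch (in Python, with invented example data): a hard-coded chatbot that can only return answers its programmer anticipated, next to a toy bigram model that generates novel text purely from word statistics gathered from its “training” sentence. The second approach is, in vastly simplified miniature, the statistical principle behind LLMs; real systems use neural networks with billions of parameters rather than a lookup of word pairs.

```python
import random
from collections import defaultdict

# --- Old-style chatbot: every answer must be anticipated and hard-coded ---
CANNED_ANSWERS = {
    "what are your hours": "We're open 9am-5pm, Monday to Friday.",
    "where are you located": "123 Example Street.",
}

def rule_based_reply(question: str) -> str:
    # If the exact question wasn't anticipated, the bot has nothing to say.
    return CANNED_ANSWERS.get(question.lower().strip("?! "), "Sorry, I don't understand.")

# --- LLM-style idea in miniature: learn word-to-word statistics from training text,
# then *generate* a novel reply by sampling a likely next word ---
TRAINING_TEXT = "we are open monday to friday . we are located on example street ."

def train_bigrams(text: str) -> dict:
    counts = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word].append(next_word)
    return counts

def generate(counts: dict, start: str = "we", max_words: int = 10) -> str:
    word, output = start, [start]
    for _ in range(max_words):
        if word not in counts:
            break
        word = random.choice(counts[word])  # statistical prediction, not understanding
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    print(rule_based_reply("What are your hours?"))
    print(generate(train_bigrams(TRAINING_TEXT)))
```

Note that the toy generator will happily produce fluent-looking strings whether or not they are true, which also hints at why hallucination is such a stubborn problem for the real thing.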

The Romance

As a fan of science and science fiction, I completely understand the appeal of AI’s potential – especially if that potential is represented by the USS Enterprise’s computer in Star Trek. It’s worth noting the very real benefits that AI can deliver, and has delivered, in some specific cases. For example:

  • In Los Angeles County, an experimental program uses machine learning to predict which residents are most likely to lose their housing and help guide effective assistance from social workers. As CalMatters reports, “It’s still an experimental strategy. But the program has served more than 700 clients since 2021, and 86% have retained their housing.”
  • To help people with speech impairments communicate, apps like Google’s Project Relate use machine learning to facilitate more effective communication and, by extension, create and deepen personal connections.
  • Although the buzz around AI became rapidly amplified over the past year, AI and machine learning have been around for a long time. Scientific research has benefited enormously from AI’s ability to process volumes of data beyond the capacity of our human minds. There is, unsurprisingly, a need to remain vigilant even among scientific researchers. Reporting for Phys.org on a Yale study titled “Artificial intelligence and illusions of understanding in scientific research,” Mike Cumming shares a warning from the study’s authors against: “… treating AI applications … as trusted partners, rather than simply tools, in the production of scientific knowledge. Doing so, they say, could make scientists susceptible to illusions of understanding, which can crimp their perspectives and convince them that they know more than they do.” (For an important and insightful interview with the study’s authors, see their discussion with Ars Technica here.) Nevertheless, with this caveat in mind, AI tools offer incredible opportunities for science ranging from medical diagnostics and new drug discoveries to finding patterns in astronomical data and developing new materials that can support our transition to green energy.

I can also appreciate the appeal in terms of personal productivity and creative expression. When I was talking about AI with a friend, she mentioned how her husband and his friends had conceived of a video game project, but the friends did nothing other than daydream and drink beer. With AI tools freeing him from the need for unreliable teammates, my friend’s husband can take his sketches and ideas and make his video game a reality. Similarly, while I have no need for AI to do my writing for me, I’m not a visual artist or game developer. But I do have ideas for comics and video games. Since I haven’t found anyone with complementary skills to realize them, could I envision using AI instead? Absolutely, especially when I’m more interested in personal expression than making money. I could also envision using AI to help with the more tedious aspects of inventing a language, such as generating a vocabulary based on specific rules, when I indulge my interest in constructed languages.
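
For the conlang example, the kind of rule-bound tedium I’d happily hand off looks something like the following minimal sketch; the phoneme inventory and the (C)V(C) syllable rule are invented purely for illustration, and a real project (or an AI assistant) would work with far richer constraints.

```python
import random

# Invented example inventory and rules; a real conlang would define its own.
CONSONANTS = list("ptkmnslr")
VOWELS = list("aeiou")

def make_syllable() -> str:
    # Simple (C)V(C) structure: consonant, vowel, optional final consonant.
    onset = random.choice(CONSONANTS)
    nucleus = random.choice(VOWELS)
    coda = random.choice(CONSONANTS + [""])  # coda is optional
    return onset + nucleus + coda

def make_word(min_syllables: int = 1, max_syllables: int = 3) -> str:
    return "".join(make_syllable() for _ in range(random.randint(min_syllables, max_syllables)))

if __name__ == "__main__":
    # Generate a small candidate vocabulary to review and prune by hand.
    print(sorted({make_word() for _ in range(20)}))
```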

When it comes to searching for information on the internet, I can appreciate what AI tools can do to ease the process and save time. Although they most definitely require a fact-checker’s mindset, they show tangible promise as research and analytical support tools. At this point in my experimentation, while MS Copilot is fine and Anthropic’s Claude is delightful in its conversational style, I particularly appreciate PerplexityAI for how well it cites its sources.

For all their capabilities and potential, current AI tools are obviously not the Enterprise computer of a fictional future. But while we can disagree over the realistic prospects of Star Trek-level AI, the frighteningly rapid pace of development means we can continue to expect new discoveries and capabilities. In fact, in the process of researching and writing this article I learned about a new pocket companion and AI personal assistant called the Rabbit R1 – not to be confused with other, more excitable products with the same name. It relies on an innovative Large Action Model in which the AI operating system not only understands language (through an LLM) but is able to follow through with tangible actions. With consideration for privacy, no subscription fees, and comparatively low prices, a lot of thought has clearly gone into this impressive device. Practical reviews notwithstanding, it’s the closest thing to the Enterprise computer that I’ve seen yet, and at the moment I’m more likely to trust the Rabbit R1 than the digital assistants we already know, like Google Assistant, Siri, and Alexa. In software like Lore Machine, we’re seeing solutions to a real problem in image generation: creating stylistically consistent sequences of images – the kind you need for a comic book, for example. And, of course, there’s Sora, OpenAI’s text-to-video tool.

So with all that in mind, where are we now, and what path is there to an optimistic tech future? Is there even a path to an optimistic tech future?

The Hype

Much like fog smothering a treacherous and winding road, marketing hype obscures our view of what AI actually is, does, and should do. It’s the latest magic cure-all, today’s version of a charming little bottle with a strangely compelling liquid inside poised to heal all afflictions. Of course, whether AI is hype or not really depends on which specific application of AI we’re referring to. It also helps to bear in mind that while the buzz seemingly appeared out of nowhere, AI has been around in various forms since the 1950s, with many interesting demonstrations and uses of AI technologies over the years. What’s changed is how personal AI has become.

On one level, it’s clear that the reality of AI doesn’t line up with the marketing, and some tools are toys more than genuinely useful creative assistants. Suno AI, for example, is an undeniably impressive text-to-song creation tool that even lets you replace generated lyrics with your own. But if you have a clear impression in your mind of the song you want to create, the tool doesn’t yet serve as a means of realizing it. It’s not like micromanaging a team of virtual musicians into making the exact song you want in the exact way you want it. Similarly, 3D-printed home pioneer ICON has an AI tool called Vitruvius that, in theory, offers text-to-plan design capabilities. Presenting itself as “your AI architect,” Vitruvius is intended as a collaborative home design tool that lets you describe your home and have the software produce plans and renderings leading to a buildable outcome. As an architectural services marketer, I find the tool somewhat worrying. It seems like a good way to reduce the influence of human architects, if not eliminate it altogether. And it’s not a stretch to think that developers and contractors would turn to “AI architects” as a means of cutting costs relative to hiring human architects, especially when we consider that the industry standard for design isn’t necessarily unique artistry but whatever is “good enough” to be profitable. That little quibble aside, my own experiments with Vitruvius ultimately left me unimpressed with the outcomes, which did not come close to matching what I envisioned. (Before questioning my text prompting skills, however, I’d point out that even something as simple as a specified layout, like a V-shaped floorplan, was beyond Vitruvius’s output – even though the chatbot clearly acknowledged my request in a display of “understanding,” quote marks intended.)

The gap between hype and reality is more serious than software like Suno and Vitruvius, however. Even acknowledging small improvements in managing problems like hallucinations, recent news such as Google’s now infamous Gemini debacle – in which the image generator produced historically inaccurate images – highlights the extent to which consumer AI tools haven’t yet achieved the reliability and quality needed to be truly useful. As Alex Cranz points out at The Verge, “AI might be cool, but it’s also a big fat liar.” With the possibility that hallucinations might not be solved, at least not with current LLMs, the prospect of accurate and reliable AI owes more to wishful than to realistic thinking. (It helps to remember that LLMs generate content based on statistics and predictions; they’re not like human brains that store knowledge and can process and communicate ideas from that knowledge.) This problem isn’t helped by the fact that we don’t really understand how these AI models – “black boxes” – actually work, either.

Even the alleged naturalness of interacting with AI is exaggerated to some extent – for example, in the realm of copilot-style AI, where we’re encouraged to develop skills in “prompt engineering.” When was the last time you told someone to “take a deep breath” or “let’s take this step by step” before asking them a question? Sure, you don’t have to use contrived speech to get results out of a chatbot, but if doing so gets you better results, how natural is the interaction? The problem with search seems to be more fundamental than that, however, with questions about reliability and how AI search tools source their answers – from Google recommending glue on pizza and ranking AI spam above actual news results to Perplexity apparently acquiring content from behind journalistic paywalls. And AI still doesn’t have the common sense that people do when it comes to interpreting information. (Which makes ideas such as Bumble founder Whitney Wolfe Herd’s AI dating concierge seem all the more ludicrous, but we won’t get into that.)

It’s worth closing out this section with observations from James Ferguson, a founding partner of the UK-based macroeconomic research firm MacroStrategy Partnership, centered on the fear that enthusiasm for AI is creating a market bubble similar to the dot-com bubble. From Fortune:

The veteran analyst argued that hallucinations—large language models’ (LLMs) tendency to invent facts, sources, and more—may prove a more intractable problem than initially anticipated, leading AI to have far fewer viable applications.

“AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.”

Ferguson also noted AI may end up being too “energy hungry” to be a cost effective tool for many businesses. To his point, a recent study from the Amsterdam School of Business and Economics found that AI applications alone could use as much power as the Netherlands by 2027.

The challenge, then, is sorting out the genuinely useful tools from the solutions in search of problems.

Photo by Valeriia Miller at Unsplash


The Danger

Not all AI models rate as Tamagotchi-level harmless. Some are more akin to nuclear power in the dangers they present, regardless of potential benefits.

With funding from the US Department of State, the AI security think tank Gladstone AI researched AI technologies through the lens of national security and risk management. While acknowledging the benefits of AI, the study’s authors note the rapid growth of computational power and the plausibility of achieving some kind of Artificial General Intelligence in the relatively near future, along with the problem that AI research is open source and thus widely accessible to a variety of entities, both well-intentioned and not. Hence, the risk:

“The development of AGI, and of AI capabilities approaching AGI, would introduce catastrophic risks unlike any the United States has ever faced. It is now plausible that the next generation of AI systems – those trained at the next level of computational scale – will be so capable that they will lead to WMD-like risks if and when they are weaponized…a significant body of evidence suggests that AI systems whose capabilities exceed a certain (currently unknown) threshold may become challenging for humans to control. Research suggests a risk that a capable enough AI system could begin to follow dangerous strategies in pursuit of objectives that are incompatible with continued human welfare. Such systems could pose large-scale and irreversible risks, even without any specific intent on the part of their developers.”

And what are some of the threats? Mass cyberattacks, disinformation, autonomous robotic systems, psychological manipulation, weaponized biological and material sciences, and supply chain subversions. Not to forget the active development of AI-powered weapons systems and military support systems.

We are already seeing some of these dangers:

  • Military drones, as we’re seeing in Ukraine. As Forbes reports, billionaire and former Google CEO Eric Schmidt “quietly founded a secretive military drone company” initially named White Stork but apparently renamed Project Eagle.
  • The use of AI for military targeting, as demonstrated by Israel’s use of systems called “Lavender” and “The Gospel” per +972’s investigation.
  • AI-powered military aircraft. As reported by the Associated Press, “AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes, the first of them operating by 2028.”
  • Public Citizen’s report “AI Joe: The Dangers of Artificial Intelligence and the Military” offers a comprehensive look at AI weapons; the executive summary is well worth reading.

And so on. In an op-ed for Time magazine, former Google CEO Eric Schmidt exemplifies exactly the kind of thinking that drives the development of new weapons: “War spurs innovation. While peacetime stifles the speed of military advances, the unfolding battles in Ukraine continue to reveal a relentless pace of technological adaptation.” He concludes his article by saying “As warfare’s cycle times quickens, so must every function supporting it, so must the functions that deter it: from public debate to international laws and policies.” But given his own role in evolving the technological sophistication of warfare, it just strikes me as the usual crocodile tears from people who resign themselves to the inevitability of war rather than commit themselves to the cause of peace. (Public Citizen notes: “The single most important insider pushing military AI applications is Eric Schmidt, former CEO of Google. Schmidt served as a technical adviser to Google when it signed up to work on the Pentagon’s Project Maven, an AI tool to process drone imagery and detect targets; Google pulled out after a revolt from its staff.”)

Ultimately, Public Citizen’s citation of a supporting report by the International Committee of the Red Cross summarizes the risk: “the process by which autonomous weapon systems function: brings risks of harm for those affected by armed conflict, both civilians and combatants, as well as dangers of conflict escalation; raises challenges for compliance with international law, including international humanitarian law, notably, the rules on the conduct of hostilities for the protection of civilians; and raises fundamental ethical concerns for humanity, in effect substituting human decisions about life and death with sensor, software and machine processes.”

The dangers aren’t only related to military applications. AI-generated deepfakes are already being used to support propaganda and disinformation campaigns, with the recent elections in India offering an interesting case study, including politicians who actively embrace deepfake versions of themselves that can essentially act as avatars on their behalf. Let’s not overlook the deepfake of President Biden’s voice in robocalls to primary voters in New Hampshire, nor recent research by the Center for Countering Digital Hate that reveals how “publicly available artificial intelligence tools can be easily weaponized to churn out convincing election lies in the voices of leading political figures.” As the Associated Press reports:

“Researchers at the Washington, D.C.-based Center for Countering Digital Hate tested six of the most popular AI voice-cloning tools to see if they would generate audio clips of five false statements about elections in the voices of eight prominent American and European politicians.

In a total of 240 tests, the tools generated convincing voice clones in 193 cases, or 80% of the time, the group found. In one clip, a fake U.S. President Joe Biden says election officials count each of his votes twice. In another, a fake French President Emmanuel Macron warns citizens not to vote because of bomb threats at the polls.

The findings reveal a remarkable gap in safeguards against the use of AI-generated audio to mislead voters, a threat that increasingly worries experts as the technology has become both advanced and accessible. While some of the tools have rules or tech barriers in place to stop election disinformation from being generated, the researchers found many of those obstacles were easy to circumvent with quick workarounds.”

But the danger of deepfakes isn’t limited to information and social trust; there’s also the very personal harm to individual safety and wellbeing, notably for women and children. For example, the Guardian recently revealed the names of the people behind an app called ClothOff, which digitally strips the clothing from people in submitted images, following disturbing incidents in which high school students discovered naked pictures of themselves on social media. “The investigation has shown the growing difficulty of distinguishing real people from fake identities that can be accompanied by high-quality photographs, videos and even audio,” say the investigation’s reporters. To make things worse, the increasing sophistication of AI tools only amplifies the problem of pornographic deepfakes and revenge porn, and it’s certainly not a new problem, as this article in MIT’s Technology Review explains.

Then there’s the question of AI’s role in mass surveillance, from France’s deployment of AI surveillance for the 2024 Olympic Games in Paris to AI’s role in government surveillance and social control in, for example, China. With the capacity to process video data at a speed beyond human capabilities, it’s entirely reasonable to ask to what extent AI will drive even more intrusive public surveillance programs.

Weapons, surveillance, disinformation – all threats that can destabilize and even collapse the social order.

Photo by Andy Kelly at Unsplash


The Problem of Trust

Referring to the Gemini debacle, Google co-founder Sergey Brin told a group of entrepreneurs that “We definitely messed up on the image generation. I think it was mostly due to just not thorough testing. It definitely, for good reasons, upset a lot of people.”

Let’s repeat: “I think it was mostly due to just not thorough testing.”

One could argue that most consumer-oriented AI products have not been sufficiently tested, since we’re still dealing with unreliable results, hallucinations, and outcomes that don’t really align with user intentions. And that’s even before getting to the various safety and ethical considerations that should be figured out before a product’s deployment, not during or after the fact. Add to the mix very worrying behind-the-scenes drama like the chaos at OpenAI, and the only conclusion I can draw is that technology companies cannot be trusted, just as their products cannot be trusted.

The problem isn’t specific to Big Tech, however, but reflects a problem with the fundamental motivation underlying business: the pursuit of profit. Whether we say it out loud or sweep it under the rug, the fact is that profit is the determining factor for a business’s success in our economy, and profit is not, by itself, aligned with social needs. This observation is the topic of books and treatises, so I won’t explore it in detail here, but I will note a few relevant observations related to AI:

  1. If something can be done for profit, there’s a business that will do it. From companies indiscriminately offering deepfake tools to defense contractors developing AI-driven weapons, there’s no shortage of companies who will develop a profitable tool whether it’s socially beneficial or not.
  2. Businesses will align with whatever maximizes profit: doing the right thing OR doing the wrong thing and lying about it through marketing. Big Oil is the most obvious example of the latter, but given all the ongoing problems and dangers of AI, and the heavy marketing efforts to persuade us how great and revolutionary AI is, this fundamental business decision is the clearest expression of why it’s hard to trust Big Tech. Are they really trying to do the right thing with AI, or are they trying to do the very minimum ethical work that lets them market the heck out of us so they can squeeze more money from our wallets?

Given that, what needs to happen so that we, as consumers, can trust the AI products and the companies who produce them? And how will this trust be rewarded with technology that is genuinely inclusive and equitable, as well as environmentally sustainable?

Moving Forward

One of the clearest actions needed is greater oversight, which includes both legislative review and independent researcher evaluations. Europe has taken decisive steps in this regard, just as China is developing its own regulatory framework. In the US, efforts at regulating AI are, so far, more visible at the state level, such as California’s proposed Senate Bill 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”. But Wired does mention the US government announcing that “a global network of AI safety institutes spanning the US, UK, Japan, Canada, and other allies will collaborate to contain the technology’s risks.”

Clear guidelines with enforceable expectations of privacy and security are unquestionably necessary to engender trust. A related effort, beyond encouraging open rather than closed models, is incentivizing greater transparency in proprietary systems so that outside researchers can understand how an AI model actually works and contribute to effective oversight and accountability. There is the risk, of course, of open models being appropriated by ill-intentioned individuals or organizations, so obviously safeguards are needed. But altogether, a clearer understanding of what’s going on “under the hood” can only benefit security as well as help companies make sure their products perform as intended.

Beyond that, it seems to me that a focus on Explainable AI is critically important. As tempting as it is to create a new profession of robot/machine psychologists in the mold of Isaac Asimov’s Dr. Susan Calvin, in practice we should limit our use of technology we don’t fully understand. At the very least, we should be wary of large-scale deployments until we are confident that we understand how they operate and why they produce the results they do. AI that can be explained is AI worthier of trust.

Despite all the serious concerns, we can find some encouragement through the efforts of individuals and organizations, such as the aforementioned Gladstone AI, for whom the ethics and safety of AI is top of mind. It will be interesting, for example, to see what OpenAI co-founder Ilya Sutskever accomplishes with his new endeavor, Safe Superintelligence Inc.

Perhaps most intriguing, and valuable, is artificial intelligence for which care is the underlying driver. As Thomas Doctor explains in an article at Buddhistdoor Global: “The bodhisattva ideal—an intelligent being dedicated to universal knowledge and the welfare of all sentient beings—offers a promising paradigm. Despite its obvious association with Buddhism, this ideal in fact transcends specific religious and cultural contexts, focusing instead on universal principles of compassion and knowledge.” He further explains the model: “Recent theories suggest that the essence of intelligence lies not merely in knowledge but in the capacity for care. Unless it makes a concrete difference, knowledge is empty and has no meaning. This perspective is captured in the stress-care-intelligence (SCI) feedback loop developed by our research team at the Center for the Study of Apparent Selves. According to this model, which applies equally to biological, technological, and hybrid cognitive systems, intelligence arises from the ability to perceive and address stressful discrepancies between things as they are and things as they should be. Care, defined as the concern for alleviating such stress, drives this process.”
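
Purely as a thought experiment, and not as a representation of the CSAS team’s actual SCI model, here is a toy sketch of what a care-driven loop could look like in code: the system perceives discrepancies between “things as they are” and “things as they should be,” treats the gap as stress, and lets care (the drive to alleviate that stress) pick the next action. All variable names, values, and actions are invented for the illustration.

```python
# Toy illustration of a care-driven feedback loop (not the actual SCI model).
# "Stress" = discrepancy between things as they are and things as they should be;
# "care" = the drive to reduce that stress by acting on the largest discrepancy.

current_state = {"hunger": 0.8, "loneliness": 0.3, "curiosity": 0.5}   # as things are
desired_state = {"hunger": 0.0, "loneliness": 0.1, "curiosity": 0.9}   # as they should be
actions = {"hunger": "find food", "loneliness": "reach out to a friend", "curiosity": "learn something new"}

def stress(state: dict, goal: dict) -> dict:
    # Perceive discrepancies between the current and desired state.
    return {k: abs(goal[k] - state[k]) for k in state}

def care_step(state: dict, goal: dict) -> str:
    # Care: attend to the largest source of stress and act to alleviate it.
    discrepancies = stress(state, goal)
    worst = max(discrepancies, key=discrepancies.get)
    state[worst] += 0.5 * (goal[worst] - state[worst])  # act, partially closing the gap
    return actions[worst]

for _ in range(3):
    print(care_step(current_state, desired_state), "->", current_state)
```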

That, to me, is a direction for AI development that I can support, and I would hope to see more of this kind of thinking in AI research. 

Photo by Eugene Zhyvchik at Unsplash


Becoming Better Humans

AI development will continue regardless of what skeptics have to say about it. Perhaps it really is in a market bubble that will pop, but the ubiquity of AI across so many different industries and applications suggests to me that, outside of the investment realm, whatever happens will be more complicated than either a boom or a bust. A large part of AI’s hype is self-fulfilling prophecy; marketing is shaping our understanding of reality (e.g., the supposedly critical need for AI in any given job) and, by extension, our response. So whether we choose to embrace or resist, we need to commit – as individuals – to becoming well informed so that we retain at least some influence over our own professional and personal lives. Of course, being up to date will also help us actively advocate for the framework of carrots and sticks that can steer AI technologies onto a beneficial rather than self-destructive course.

That said, I’m not convinced that this excessive focus on AI is the wisest way for us to collectively spend our time and resources – especially given the heavy environmental impact of AI data centers. While AI tools no doubt help with specific applications, the most critical challenges we face globally remain political in nature. Facing existential risks stemming from our inability to solve fundamental issues related to the physical and mental well-being of people across the world, it has become beyond urgent for us to rethink our relationship not just to AI specifically, but to technology in general. Efforts to achieve superhuman-level intelligence in the absence of an ethical framework like the bodhisattva ideal, notably OpenAI’s rumored and secretive “Strawberry” project, therefore strike me as the pursuit of unnecessarily risky technology that we’re far from capable of handling.

As I’ve pointed out before, technology cannot solve problems of psychology, and research supports the view that our current relationship to it is maladaptive. In an article at The Conversation, Jose Yong – Assistant Professor of Psychology at Northumbria University – points out that “Research is showing that many of our contemporary problems, such as the rising prevalence of mental health issues, are emerging from rapid technological advancement and modernisation.” Pointing to the theory of evolutionary mismatch, in which “an evolved adaptation, either physical or psychological, becomes misaligned with the environment,” Dr. Yong makes a persuasive case about our struggles to thrive in a modern environment and how addressing this mismatch, perhaps through approaches such as minimalism and mindfulness, can help us adapt better to our changing world. In other words, what we need more than AI is to become better humans – together. Perhaps we should consider that less technology, used more selectively, may actually be the better path forward.





