Should We Fear or Embrace Our AI Future?

Should we be exhilarated or mortified by the prospect that Artificial Intelligence (AI) will become an even more dominant force in our lives as soon as the next decade? As more organizations jump on the AI bandwagon, a “shoot first and ask questions later” mentality has developed. For them, the question is merely how quickly they can get into the race and become part of the winner’s circle, which is prompting many participants to skip steps they might otherwise have taken in the product development process. In addition, the massive increase in AI-oriented startups and established vendors is confusing consumers, who are increasingly unable to distinguish one service provider, service, or product from another.

Eventually, an equilibrium will be established between the firms offering AI products and services and the consumers who purchase them, in which providers will be clearly differentiated and consumers will become knowledgeable and experienced enough to recognize, and reward, the difference. We are, however, many years away from that point.

Ironically, the data used to create AI, and the data generated by its use, remain among the most neglected intangible assets, rarely appearing on companies’ balance sheets. Very few companies treat data as a balance sheet asset, either because they do not think of it as an asset or because there is no standard methodology for attributing monetary value to data. As the race toward AI supremacy marches on, this omission becomes ever more important. Failing to accurately quantify the enterprise value of data may leave not only a firm’s stock price and brand equity woefully undervalued, but also the potential value of its AI-related assets and investments.

Definitions of what constitutes the enterprise value of data, and methodologies for calculating it, remain in their infancy. Many existing means of determining the impact of data-related risk on the bottom line, such as assessing an organization’s cyber security posture, miss the real potential value of data. Some of the world’s largest companies have recognized this and have begun to quantify what brand equity, which includes a valuation for data, is actually worth, by including it as an asset on their balance sheets.

To unlock the hidden value of data, firms should begin to treat data itself as an integral part of their supply chain, because data affects the entire ecosystem in which a firm operates. The challenge is to define what ‘data equity’ actually means to a given firm and to quantify it, so that economic value can be ascribed to this asset class over time.
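
To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of how a firm might prototype such an estimate. Since, as noted above, no standard methodology exists, the attribution rate, discount rate, and revenue figures below are invented for illustration only.

```python
# Hypothetical sketch of a crude "data equity" estimate: attribute a share of
# projected revenue to data-driven activities and discount it to present value.
# All figures and rates below are invented assumptions, not a standard method.

def data_equity(revenue_projection, data_attribution, discount_rate):
    """Present value of the slice of future revenue attributed to data."""
    return sum(
        (revenue * data_attribution) / (1 + discount_rate) ** year
        for year, revenue in enumerate(revenue_projection, start=1)
    )

# Assumed five-year revenue projection (in $ millions)...
projected_revenue = [100, 110, 121, 133, 146]

# ...with 15% of revenue attributed to data and a 10% discount rate (assumed).
value = data_equity(projected_revenue, data_attribution=0.15, discount_rate=0.10)
print(f"Illustrative data equity: ${value:.1f}M")
```

A real valuation would require a defensible attribution rate (how much revenue actually depends on the data), but even a crude model like this forces a firm to state its assumptions explicitly.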

AI is already a fact of life, and its potential will grow exponentially, along with its applicability and impact. The ability to rent cloud space or outsource computational resources has brought relative costs down to earth, and they will continue to fall. The widespread use of open-source, Internet-based tools and the explosive growth in data generation have also made a big difference. So much data is now generated globally each day that only gigantic infusions of data are likely to move the needle on AI utilization going forward. This implies that only the largest, most technically sophisticated firms, with the capability to consume and process such volumes of data, will benefit from it in a meaningful way in the future.

Governing AI

Some of the greatest thinkers of our time are pondering what our AI future may imply. Henry Kissinger sees AI as dealing with ends rather than means, and as inherently unstable, to the extent that its achievements are, at least in part, shaped by itself. In his view, AI makes strategic judgments about the future, yet the algorithms upon which it is based are mathematical interpretations of observed data that do not explain the underlying reality producing that data. He worries that, by mastering some fields more rapidly and definitively than humans can, AI may diminish human competence and the human condition over time, reducing human experience to mere data. Because AI makes judgments about an evolving, as-yet-undetermined future, Kissinger argues, its results are imbued with uncertainty and ambiguity, leading to unintended outcomes and a danger that AI will misinterpret human instructions.

In achieving its intended goals, AI may change human thought processes and human values, or may be unable to explain the rationale for its conclusions. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition. While the Enlightenment began with philosophical insights spread by new technology, the period in which we are living is moving in the opposite direction: it has generated a potentially dominating technology in search of a guiding philosophy.

In a world filled with unintended consequences, what else might be lost along the way? Will our collectively shared values fall by the wayside in the push for AI supremacy? Will we lose our ability to distinguish between a victory and a victory worth having, in business as well as on the battlefield? Will the notion of human accountability eventually disappear in an AI-dominated world? Some military strategists view an AI-laden battlefield as “casualty-free” warfare, since machines will be the ones doing the killing and being put at risk. Could the commercial AI landscape evolve into a winner-takes-all arena in which only one firm or machine is left standing?

New forms of threat are evolving as AI becomes more widely utilized, so it is important that we regain agency over it. Just as Microsoft proposed a Digital Geneva Convention to govern how governments use cyber capabilities against the private sector, an international protocol should be created to govern not only how governments project AI onto one another, but also how they do so toward the private sector, and how private actors do so among themselves.

Yet no such “rules of the road” exist for AI right now. While AI remains in an embryonic state, now is the perfect time to establish the rules, norms, and standards by which it is created, deployed, and utilized, and to ensure that it enhances globally shared collective values and elevates the human condition in the process. While there will probably never be a single set of universal principles governing AI, trying to understand how to shape the ethics of a machine forces us, at the same time, to think more about our own values and about what is, ultimately, truly important.

Attempting to govern AI will be neither an easy nor a pretty process, for there are overlapping frames of reference, and many of the sectors in which AI will have the greatest impact are already heavily regulated. New norms are emerging, but how will they be reconciled with existing regulation? It will take a long time to work through the questions being raised in the process. Many are straightforward questions about technology, but many others are about what kind of societies we want to live in and what values we wish to adopt in the future. If AI forces us to look at ourselves in the mirror and tackle such questions with vigor, transparency, and honesty, then its rise will be doing us a great favor. History suggests, however, that the things that should really matter will either get lost in translation or be left by the side of the road in the process.

We may see a profound shift in agency away from man and toward machine, wherein decision-making becomes increasingly delegated to machines. If so, our ability to implement and enforce the rule of law could prove to be the last guarantor of human dignity and values in an AI-dominated world. Yet, given that we still grapple with such fundamental issues as equality and gender bias with great difficulty, what should sit at the top of the AI “values” pyramid? How can we even know what human-compatible AI is or will become?

Some would argue that we are getting ahead of ourselves by imagining a world dominated by AI and worrying about its potential implications, particularly since, in many respects, we are still at the beginning of the runway. Much of what has been accomplished in the AI arena remains rudimentary. How excited should we really be about robots that can master repetitive tasks, or be programmed to jump up and down, or answer basic questions? The truth is, we have only just begun to understand how to actually “build” AI. Mostly, what we know how to do is collect and exploit statistics from Big Data. AI cannot be “produced” simply by amassing data, any more than the weather can be predicted with complete accuracy simply by amassing measurements.
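
To illustrate the point, here is a deliberately simple Python sketch of what “collecting and exploiting statistics from Big Data” amounts to in practice: fitting a pattern to observations. The data and numbers are invented for illustration; nothing here is drawn from the article itself.

```python
# Illustrative sketch: much of applied "AI" is statistical pattern-fitting.
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic "observed data": a hidden linear process plus noise.
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 7.0 + rng.normal(0, 2.0, size=200)

# Least-squares fit: the model captures the statistical pattern in the data...
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"learned pattern: y = {slope:.2f}x + {intercept:.2f}")

# ...but it has no notion of *why* the pattern holds, and extrapolating far
# beyond the observed range is an act of statistical faith, not understanding.
print("extrapolation at x=1000:", slope * 1000 + intercept)
```

The fitted line reproduces the data well, yet it explains nothing about the process that generated it, which is precisely the gap between statistics and “building” AI.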

We may be approaching a wall in terms of how much further AI will advance meaningfully in the near term. The longer term is a completely different proposition. Coming even close to our potential in scientific discovery will require a much larger effort than teams of people sequestered in a room thinking it all through. Only those organizations and countries that commit massive resources today, toward solving many of AI’s intractable challenges and creating a competitive edge, have a chance of achieving AI supremacy in the next decade. That implies adopting a mindset that makes AI an integral part of the long-term planning process, with clear objectives and benchmarks in view. That will be much easier said than done, of course, no matter how large the organization or how committed the government.

Sprinting to a Mythical Finish Line

The organizations that ultimately prevail in the race for AI supremacy will have mastered the use of AI technology and Big Data. They will not necessarily be the industry leaders in providing data and technology; rather, they will be the leaders in integrating both into their organizations so as to maximize operational and financial value. Consider Alphabet, Apple, Amazon, and Facebook. Sure, these firms all have a rather large finger in the technology pie, but they are all exceedingly busy dominating the markets in which they operate, while pushing aggressively into new markets they hope one day to dominate.

But achieving AI supremacy is about much more than industry domination or sprinting to some mythical, unseen finish line out there in the mist, for there is no real end to this race. AI will continue to evolve, with new technological advances arriving with regularity, establishing a perpetual “new normal” and redefining what it means to succeed in the AI arena. There can be no single victor, and “victory” will be a relative term, for it will always be only a matter of time until another entity surpasses whatever the victor of the moment has achieved. Any victor’s time in the winner’s circle will, by definition, be short-lived.

Numerous organizations and governments will achieve evolving states of “supremacy” in this realm, and they will all have certain basic things in common. It is a given that they will have invested heavily in AI and machine learning (ML), in terms of financial, human, and technological resources. But they will also have adopted both defensive and offensive postures, a strategic planning process aligned with AI-oriented objectives, and a willingness to fail along the way, dust themselves off, and get back to it.

You might say that sounds like what any organization must do to stay in the ring, and you would be right. The difference is that only those organizations that committed to doing so at the beginning of the AI race stand a chance of competing effectively in it, let alone achieving AI supremacy. This is not a race one can choose to enter at some future point, for those who have been at it for years will be too far ahead for the laggards ever to catch up. That is likely already true.

A number of eminent scientists, engineers, thought leaders, and visionaries fear that, once we succeed in building an AI smarter than we are, our own demise may be the natural result. Elon Musk has warned against “summoning the demon,” envisaging an immortal dictator from which we could never escape. Shortly before his death, Stephen Hawking declared that Artificial General Intelligence (AGI) could spell the end of the human race. While such dire predictions are not new (they have been made since AI came into being in the 1950s), they have taken on new meaning as we grow ever closer to making AGI a reality, even though it remains a somewhat distant goal.

As a result of advances in chip design, processing power, and Big Data hosting, AI’s capabilities have become so ubiquitous that we rarely notice them. It is simply a matter of fact that Siri schedules our appointments, Facebook tags our photos, and GPS plots our routes before we even know where we are going. We may not even notice when something that would have seemed a revolutionary technological advance a decade ago is released to the general public as a new product. We have become so conditioned to expect rapid AI progress that we are disappointed if a new smartphone is released without some major AI-oriented gadgetry embedded inside it.

So, what is left for humans? Some argue that the relationship between human and machine intelligence should be understood as synergistic rather than competitive. After all, AI largely helps us do better what we already do: working alongside robots augments human productivity. That is true for now, at least. But what about when applications of AI surpass human capability in more areas than they complement it?

The real risk of AGI may stem not from malice or emergent self-consciousness, but simply from autonomy, for intelligence entails control. A recursive, self-improving AGI might be smarter than us not in the way Einstein is smarter than the average person, but in the way an average person is smarter than a beetle or a worm. No matter how intelligent machines become, they will lack human emotion and instinct; yet instead of seeing this as a disadvantage, perhaps we should think of it as a net plus. AI will presumably not pursue a course of action it calculates will fail, while humans may proceed anyway, letting their emotions guide them or following their gut instinct.

Perhaps, rather than sprinting toward a mythical finish line, we should slow things down, take a deep breath, and ponder where we, and AI, are going. Most businesses and governments are, after all, really just getting their feet in the starting blocks to begin their version of “the race”. Given all the unknown unknowns, should we not figure out where we are going, and why, first?

That may fly in the face of conventional wisdom and run counter to business culture in our hyper-competitive world, but, in the end, as in Aesop’s fable, the tortoise beats the hare. AI supremacy will not be achieved simply by devoting the necessary resources to the task. Strategy, direction, resources, vision, and application will ultimately determine who stays in the race and who gets the chance to join the winner’s circle.

Daniel Wagner is CEO of Country Risk Solutions and co-author of AI Supremacy.
