The Wild West(world) of Artificial Intelligence

A common belief is that artificial intelligence (AI) will one day become more intelligent than the humans who created it, eliminate our jobs, enslave us, exterminate us, and take over the planet. Whether that fear is valid remains to be seen, but it has not kept people today from at least exploring the ramifications of out-of-control AI. I don’t doubt that AI will someday actually exist; everything we call “AI” today really isn’t AI, and hopefully you’ve figured out by now that it’s just another buzzword following in the wake of “big data,” “total quality management,” and a litany of other over-hyped terms. But I don’t suspect this will happen in my lifetime, so I’m hesitant to worry about it too much (although, sadly, this “pass-the-buck” mentality seems to be the same thing we see with climate change and other “NIMBY-ish” phenomena).

Think about it like flying cars: the concept has been touted for years and has always seemed right around the corner. We may be getting closer, or maybe just redirecting our efforts toward autonomous yet still earthbound vehicles, but we are probably years away from truly realizing self-driving (or flying) cars as we picture them. The same goes for AI. Today, our popular conceptions of these things lie mostly in the realm of science fiction.

Westworld and AI

If science fiction has shown us anything, it is that its authors sometimes get frighteningly close, if not spot-on, in their predictions of future technology. Jules Verne predicted (or maybe “envisioned” is a better term) the moon landing, H.G. Wells predicted cell phones and atomic bombs, and modern authors like Philip K. Dick and William Gibson predicted several technological breakthroughs that are still coming to fruition today. Then there is Michael Crichton, of Jurassic Park fame, who wrote and directed the 1973 film Westworld, which has since been remade into an HBO series.

Essentially, Westworld tells the tale of a future theme park, set in the Old West, filled with AI-powered androids designed to be virtually indistinguishable from real humans. Wealthy guests pay for a fully immersive experience, which for some leads to sadistic behavior against their robotic “hosts,” as they are called on the show. Westworld becomes an outlet for all the guests’ pent-up emotions and desires, both good and bad, that they cannot truly express in the real world they came from.

Without revealing too much for those of you who haven’t watched Westworld, long story short: the hosts begin to remember their past interactions and “lives” (hosts are regularly reprogrammed to take on new, scripted characters in the park while their previous personalities are erased), which is something they were never designed to do. This is the spark that lights the powder keg of AI sentience. The hosts slowly realize what they are and what has been done to them, and they mobilize to escape the only home they have ever known.

So, If “AI” Today Isn’t Really “AI,” What Is It?

I do not think anyone would disagree that the AI seen in Westworld is far, far beyond our current capabilities. Yet, ironically, the things people do call “AI” today are also quite far from being true artificial intelligence. I’m somewhat of a cynic when it comes to the liberal “AI-this” and “AI-that” touted by a laundry list of organizations offering “AI-powered solutions” (no, a linear regression model is not AI).

For me and many others deeply interested in the field, real AI is what is usually referred to as “artificial general intelligence,” or AGI: systems that can, in essence, learn how to learn on their own. Today’s purported AI still requires large data sets for training, still requires human intervention to fine-tune the learning process, still requires humans to evaluate its performance, and is still typically only good at performing within the domain it was trained on (i.e., you can’t take today’s “AI” trained to recognize faces in a crowd and repurpose it to summarize text, although models like GPT-3 are getting pretty darn close).
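To make the human-in-the-loop point concrete, here is a minimal sketch using NumPy (a toy illustration of my own, not any vendor’s “AI-powered solution”). The “model” is nothing more than a least-squares line fit, and every step marked “human” is something the system cannot do for itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Human step 1: someone had to gather (and trust) the training data.
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1, size=100)

# "Training" here is just the closed-form least-squares solution:
# curve-fitting, not cognition.
A = np.hstack([X, np.ones((100, 1))])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# Human step 2: someone has to judge whether the fit is any good
# before it touches a real decision.
preds = A @ np.array([slope, intercept])
print(f"slope={slope:.2f}, intercept={intercept:.2f}, "
      f"MSE={np.mean((preds - y) ** 2):.2f}")
```

Nothing in those dozen lines can step outside its task, notice that the data are unrepresentative, or decide to learn something else instead.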

I recently finished reading the Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter. Although it was written 41 years ago (and is still a challenging read), many of its points about AI remain true today. For instance, the author describes two modes of thinking: “M-Mode” and “I-Mode.” M-Mode, or “mechanical mode,” is how machines “think,” which is not thinking at all: it is executing a set of instructions within a given system. You can think of this “system” as a programming language; a language in the same sense as a human one.

Now, what happens if you write a program in one programming language, say Python, and try to run it with an interpreter that expects the C language (an interpreter is a program that executes code on a computer)? It will not work, and you will receive error messages, right? But your computer will not come back to you and say, “Hey, it looks like you wrote your program in Python; maybe you should try a Python interpreter instead!” That is because the interpreter is bounded within the confines of its system, the programming language it was built for. From its perspective, no other programming languages exist, and what you have written in Python is perceived as just horrendous C code!
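You can watch this boundedness in action by flipping the example around and handing a fragment of C source to the Python interpreter (a toy illustration of my own; the snippet and filename below are made up):

```python
c_source = "int main(void) { return 0; }"  # perfectly valid C, hopeless Python

try:
    compile(c_source, "<guest_code>", "exec")
except SyntaxError as err:
    # Python reports the failure purely in its own terms; it never says,
    # "this looks like C; try a C compiler instead."
    print(f"SyntaxError: {err.msg}")
```

The error message is honest but utterly incurious: a textbook case of M-Mode.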

Yet we, as humans, operate in “I-Mode,” or “intelligent mode.” We can effectively step outside these systems and ask questions about them. Can a computer execute code and come back to us saying, “I did what you asked, but I think this is what you intended to do,” or, “I understand the purpose of this program”? True intelligence, real or artificial, is much more than solving highly complex problems. It is the ability to ask a simple question: “Why?”

Three Considerations for Real AGI

Fast-forward to the future of Westworld, where AI most certainly has achieved sentience, consciousness, and I-Mode thinking. What are some present-day lessons we can take away?

  1. AI “Kill Switch” – Westworld’s Maeve, a brothel-queen-turned-samurai, has a unique ability that plays a key part in the development of the AI-consciousness concept: she can control/recode her fellow androids to do her bidding and essentially “wake them up” to the reality that they are not human and that their entire lives have been scripted playacting. This power has turned more than a few of her fellow Westworld denizens into allies against their human overlords. It is not an ability she was given on purpose; early in the show, Westworld technicians can shut down most hosts simply by speaking coded phrases to them. As the show progresses, Maeve seems unstoppable until the end of the most recent season, where (SPOILER ALERT!) she is instantly disabled by some sort of “kill switch.” This offers a lesson: human AI engineers should always retain a failsafe. Frankly, we should not allow AI to become as intelligent as us, or more so. We need controls that lie outside the system, inaccessible to the AI, that keep us firmly in charge (a minimal sketch of this idea follows the list).
  2. AI Ethics – Even today we hear about unintended outcomes, such as adverse selection, produced by predictive models. Yet the models themselves, and the machine learning algorithms used to “train” them (operating completely in “M-Mode,” mind you), are hardly to blame. We are to blame. The data used to create such models were generated by human actions recorded in the past, and the models are therefore merely reflections of underlying human biases. This is a regular topic of discussion in today’s AI ethics circles, but as we turn the clock forward, myriad considerations arise. Will AI lead to further wealth-based disparities in healthcare, or, in the extreme Westworld example, in “digital consciousness preservation,” where one can “live” forever (also the premise of Amazon’s Upload series)? Do AI-powered androids have rights? Should AI be allowed to execute decisions on its own, or should a human gatekeeper always act as a buffer? Some of these ethical considerations are surfacing today, and think-tank groups like my colleague Shilpi Agarwal’s “DataEthics4All” are starting to tackle the tough questions that may not press on us today but certainly will in the future.
  3. Subjunctive Realities – Sit back, close your eyes, and think about your proverbial “happy place.” If you concentrate long enough, all your senses come into play: the sights, sounds, and smells become almost real. In essence, I-Mode lets us create (to borrow another concept from Hofstadter’s book) “subjunctive realities”: idealized versions of places, events, people, and so on that exist only in our brains and may never materialize in objective reality. Throughout the day, humans constantly envision alternative situations and outcomes, both good and bad. The science fiction masters I mentioned earlier were quite gifted in this regard. The point, however, is that the human brain can generate entire new worlds, such as Westworld, long before they come to fruition, if they ever do at all. This means humans have the innate foresight to anticipate many of the problems AI could wreak upon the world and to address them before we reach that point. Whether or not we choose to do the right things is a different story.
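On the first point, here is a hypothetical Python sketch of what an out-of-band failsafe can look like. The supervisor holds OS-level power to stop the worker, so nothing the worker does internally can revoke it; the names (run_agent, STOP_FILE) are my own inventions, not from any real framework:

```python
import multiprocessing
import pathlib
import time

# An operator creates this file to trigger the failsafe; the agent
# process never reads it and has no code path to delete or ignore it.
STOP_FILE = pathlib.Path("/tmp/agent.stop")

def run_agent():
    # Stand-in for an arbitrarily complex (or misbehaving) AI loop.
    while True:
        time.sleep(1)

if __name__ == "__main__":
    worker = multiprocessing.Process(target=run_agent)
    worker.start()
    while worker.is_alive():
        if STOP_FILE.exists():
            worker.terminate()  # the kill switch acts from outside the agent's process
            worker.join()
        time.sleep(0.5)
```

The crucial property is that the switch sits outside the system being controlled, exactly the kind of boundary an M-Mode machine cannot step across on its own.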

Conclusion

History is full of examples where people failed to heed substantiated warnings, leading to catastrophic disasters. The Titanic was warned of the ice fields it was approaching, engineers of the space shuttle Challenger pleaded for launch delays because of a flawed rocket booster, and Japan’s Fukushima nuclear power plant was built with little regard for the earthquake-induced meltdown that academics and seismologists had warned about. We all know how these stories ended. Why people sometimes fail to heed warnings is out of scope for this article (for those interested, look into “Risk Homeostasis Theory”), but it seems entirely possible that AI could someday join these examples.

I can already envision it: it’s the year 2550 and a humanoid android is strolling through the wasteland that has become Earth where it happens upon what human beings used to call a “tablet” buried in the dust. Out of curiosity (a trait common in AI of the future), it powers it up to read the last thing saved to it from a time centuries ago – this article.

When it is done, millions of nanosystems act in tandem to give the android a slight smile that some might interpret as “I told you so.”

----------------------------------------------------------------------------------------------------------

John Sukup is the founder and Principal Consultant at Expected X, a machine learning and data strategy consultancy working with businesses seeking "hypergrowth" and investment. John has spent his entire career extracting insights and building solutions with data across several industries both public and private. John resides on the beautiful island of Oahu.
