The Wild West(world) of Artificial Intelligence
A common belief is that artificial intelligence (AI) will one day become more intelligent than the humans who created it, eliminate our jobs, enslave us, exterminate us, and take over the planet. Whether there is any validity to this remains to be seen, but it has not kept people today from at least exploring the ramifications of out-of-control AI. Although I don’t doubt that AI will someday actually exist (everything we call “AI” today really isn’t AI – hopefully you’ve figured out by now that it’s just another buzzword following in the wake of “big data,” “total quality management,” and a litany of other over-hyped terms), I don’t suspect it will happen in my lifetime, so I’m hesitant to worry about it too much (although, sadly, this “pass-the-buck” mentality seems to be the same thing we see with climate change and other “NIMBY-ish” phenomena).
Think of it like flying cars: the concept has been touted for years and has always seemed to be right around the corner. Although we may be getting closer – or maybe just redirecting our efforts toward autonomous, yet still earthbound, vehicles – we are probably still years away from truly realizing self-driving (or flying) cars as we picture them. The same goes for AI. Today, our popular conceptions of these things lie mostly in the realm of science fiction.
Westworld and AI
If science fiction has shown us anything, it is that authors sometimes get frighteningly close, if not spot-on, in their predictions of future technology. Jules Verne predicted (or maybe “envisioned” is a better term) the moon landing, H.G. Wells predicted cell phones and atomic bombs, and modern authors like Philip K. Dick and William Gibson predicted several technological breakthroughs that are still coming to fruition today. Then there is Michael Crichton of Jurassic Park fame, who wrote and directed the 1973 film Westworld, which has since been remade into an HBO series.
Essentially, Westworld tells the tale of a futuristic theme park, set in the Old West, filled with AI-powered androids designed to be virtually indistinguishable from real humans. Wealthy guests pay for a fully immersive experience, which for some leads to sadistic behavior against their robotic “hosts,” as they are called on the show. Westworld becomes the outlet for all the guests’ pent-up emotions and desires, both good and bad, that they cannot truly express in the real world they came from.
Without revealing too much for those of you who haven’t watched Westworld, long story short: the hosts begin to remember their past interactions and “lives” (hosts are regularly reprogrammed to take on new, scripted characters in the park while their previous personalities are erased), which is something they were never designed to do. This acts as the spark that lights the fuse on the powder keg of AI sentience. The hosts slowly realize what they are and what has been done to them, and they take up arms to escape the only home they have ever known.
So, if “AI” Today Isn’t Really “AI,” What Is It?
I do not think anyone would disagree that the AI seen in Westworld is far, far beyond our current capabilities. Yet, ironically, the things people do call “AI” today are also quite far from being true artificial intelligence. I’m somewhat of a cynic when it comes to the liberal use of “AI-this” and “AI-that” touted by a laundry list of organizations offering “AI-powered solutions” (no, a linear regression model is not AI).
For me and many others deeply interested in the field, real AI is what is usually referred to as “artificial general intelligence,” or AGI – systems that can, in essence, learn how to learn on their own. Today’s purported AI still requires large data sets to train on, human intervention to fine-tune the learning process, and humans to evaluate its performance, and it is typically only good at performing within the domain it was trained on (i.e., you can’t take today’s “AI” trained to recognize faces in a crowd and repurpose it to summarize text – although models like GPT-3 are getting pretty darn close).
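To make that concrete, here is a minimal sketch of what a typical present-day “AI” pipeline looks like. It is my own illustration (using scikit-learn and its bundled digits data set, not anything from a particular vendor), and the point is simply that a human shows up at every step:

```python
# A human collected and labeled the data; the model just curve-fits on it.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # labeled data, courtesy of humans
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A human chose the model family and tuned its knobs (C, max_iter).
model = LogisticRegression(C=0.1, max_iter=1000)
model.fit(X_train, y_train)  # "learning," bounded entirely by this domain

# A human evaluates the result; the model never asks "why?"
print(accuracy_score(y_test, model.predict(X_test)))
```

Ask this model to summarize text instead of classifying digits and it simply can’t – nothing outside its training domain exists from its perspective.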
I recently finished reading the Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter. Although written 41 years ago (and still a challenging read), many of its points about AI remain true today. For instance, the author describes two modes of thinking: “M-Mode” and “I-Mode.” M-Mode, or “mechanical mode,” is how machines “think,” which is not thinking at all – it is executing a set of instructions within a given system. You can think of this “system” as a programming language – a language in the same sense as a human one.
Now, what happens if you write a program in one programming language, say Python, and try to run it in an interpreter that expects it to be written in the C language (an interpreter being a program that executes your code on a computer)? It will not work, and you will receive error messages, right? But your computer will not come back to you and say, “Hey, it looks like you wrote your program in Python – maybe you should try a Python interpreter instead!” This is because the interpreter is bound within the confines of its system/programming language. From its perspective there are no other programming languages, and what you have written in Python is perceived to be nothing more than horrendous C code!
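You can try this yourself. Here is a minimal sketch (the file name is hypothetical, and it assumes you have gcc installed – I’m using a C compiler to stand in for the C interpreter above, since that’s what most of us have on hand):

```python
import subprocess

# A perfectly valid Python program...
with open("hello.py", "w") as f:
    f.write('print("Hello, Westworld!")\n')

# ...handed to a C compiler, with "-x c" telling it to treat the file as C.
result = subprocess.run(
    ["gcc", "-x", "c", "hello.py", "-o", "hello"],
    capture_output=True,
    text=True,
)

# The errors complain strictly in C terms (stray tokens, missing
# declarations). Nowhere does the compiler suggest "try python3 instead" –
# from inside its system, no other language exists.
print(result.stderr)
```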
Yet we, as humans, operate in “I-Mode,” or “intelligent mode.” We can effectively step outside these systems and ask questions about them. Can a computer execute code and come back to us saying, “I did what you asked, but I think this is what you intended to do,” or “I understand the purpose of this program”? True intelligence, real or artificial, is much more than solving highly complex problems. It is the ability to ask a simple question: “Why?”
Three Considerations for Real AGI
Fast-forward to the future of Westworld, where AI has most certainly achieved sentience, consciousness, and I-Mode thinking. What are some present-day lessons we can take away?
Conclusion
History is full of examples of people failing to heed substantiated warnings, with catastrophic results. The Titanic was warned of the ice fields it was approaching, engineers of the space shuttle Challenger pleaded for launch delays due to a rocket booster flaw, and Japan’s Fukushima nuclear power plant was built with little regard for the earthquake-induced meltdown that academics and seismologists had warned of. We all know how these disasters ended. Why people sometimes don’t heed warnings is out of scope for this article (for those interested, look into “Risk Homeostasis Theory”), but it seems possible that AI could someday join my aforementioned examples.
I can already envision it: it’s the year 2550, and a humanoid android is strolling through the wasteland that Earth has become when it happens upon what human beings used to call a “tablet” buried in the dust. Out of curiosity (a trait common in the AI of the future), it powers the device up to read the last thing saved to it from a time centuries ago – this article.
When it is done, millions of nanosystems act in tandem to give the android a slight smile that some might interpret as “I told you so.”
----------------------------------------------------------------------------------------------------------
John Sukup is the founder and Principal Consultant at Expected X, a machine learning and data strategy consultancy working with businesses seeking "hypergrowth" and investment. John has spent his entire career extracting insights and building solutions with data across several industries, both public and private. He resides on the beautiful island of Oahu.