#BigIdeas2018 - The Artificial Intelligence Odyssey

In Stanley Kubrick’s seminal sci-fi opus, 2001: A Space Odyssey, it was but one year into the nascent millennium when supercomputer HAL flatly proclaimed, “I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.” Kubrick’s forecast of cognitively sophisticated computers realigning the fundamental relationship between man and machine may have been 17 years ahead of schedule, but in 2018 the emergence of artificial intelligence (AI) is poised for a tipping point that will solidify the prescience of his 1968 film.

So what exactly is artificial intelligence, and do we need to worry that computers will imminently grow smarter than humans and ultimately turn against us, as HAL eventually does? (Sorry kids, no "Spoiler Alerts" for a 50-year-old film.) Artificial intelligence, or cognitive computing, is loosely defined as the performance by a machine of tasks that generally require complex analysis and the ability to learn from previous interactions without additional human input.

It’s not as intimidating as it may sound. In fact, if you’ve had access to high-speed internet for the better part of the last two decades, you’ve probably engaged with AI already. Those uncannily accurate Netflix recommendations? Artificial intelligence; the AI system parsed the breakneck pace at which you burned through Breaking Bad over the holidays, and the frequency with which you return to the first three Die Hard films, sussed out an affinity for buzzcut-sporting anti-heroes, and determined The Transporter would probably be right up your alley. Your Roomba’s sixth sense for finding the corners and crevices of your basement in most desperate need of dusting? That’s automated reasoning, a core function of AI. Your smartphone auto-predict’s propensity to complete every word beginning with an “f” and a “u” with two very particular consonants? That’s predictive analytics, based on previous inputs. And maybe your Galaxy should have come with a roll of Charmin for that potty mouth of yours.
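
That flavor of predictive analytics can be surprisingly simple under the hood. As a purely illustrative sketch (not any phone maker's actual algorithm), a next-word suggester can do little more than count which word most often followed each word in your previous messages:

```python
from collections import Counter, defaultdict

def train_bigrams(history):
    """Count word-to-next-word frequencies from past messages."""
    follows = defaultdict(Counter)
    for message in history:
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Suggest the word most often seen after `word`, if any."""
    candidates = follows.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Hypothetical message history for illustration.
history = [
    "see you at the gym",
    "meet me at the gym",
    "at the office until six",
]
model = train_bigrams(history)
print(predict_next(model, "the"))  # prints: gym ("gym" followed "the" twice, "office" once)
```

Feed it a different history and the suggestions change accordingly, which is exactly why your phone's predictions say more about you than about the software.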

While such early iterations of AI were often gimmicky in nature, focused largely on wowing users with predictive razzle-dazzle, 2018 is poised to bring a more holistic integration of AI into the core backend facets of business operations. At the center of corporate AI functionality are two distinct “smart” functions: machine learning and deep learning. Machine learning refers to the ability of a computer to evolve in its functionality and operational sophistication without being reprogrammed. The capability of AI systems to grow through interaction - recognizing and adapting to patterns with greater speed and accuracy than human analysts - reduces the need for coders to burn countless hours hand-coding software to facilitate each update. Deep learning represents the next evolution of automated cognition, in which a machine’s architecture essentially mimics that of the neural networks of the human brain. Such sophisticated processing allows for identification of complex patterns through the white noise of input imperfections like missing details or extraneous data. While machine learning requires the programming of intelligence into an AI system, with deep learning, the capacity for high-level reasoning is latent within the program itself, waiting to be unleashed with an infusion of data.
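
The "learning without reprogramming" idea is easier to see in code. Here is a deliberately tiny, illustrative sketch - a perceptron, one of the oldest learning algorithms - that discovers the logical AND rule purely from labeled examples, with no one ever programming the rule itself:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from labeled examples instead of hand-coded rules."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in examples:
            pred = 1 if w0 * x + w1 * y + b > 0 else 0
            err = label - pred       # zero when correct, +/-1 when wrong
            w0 += lr * err * x       # nudge the weights toward the answer
            w1 += lr * err * y
            b += lr * err
    return w0, w1, b

# Labeled data for logical AND -- the program is never told the rule itself.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
classify = lambda x, y: 1 if w0 * x + w1 * y + b > 0 else 0
print(classify(1, 1), classify(1, 0))  # prints: 1 0
```

Deep learning, roughly speaking, stacks many layers of units like this one, which is what lets it pick out patterns despite noisy or incomplete inputs.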

The connective tissue between machine learning and deep learning is the independent function of technology systems outside the parameters set by human stewardship. Danny Lange, who has designed learning systems for Amazon and Uber, summed it up in a recent interview with Fast Company. “The key message is, you have a learning system, and that’s the disruption,” Lange explains. “Your computer can do more than it’s told to do because it gets the data and it learns from it, and the loop makes it improve endlessly.”
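
Lange's loop - get data, learn from it, improve - fits in a few lines. In this illustrative toy (assumed for demonstration, not drawn from his actual systems), the program starts knowing nothing and homes in on the relationship y = 3x simply by repeating the predict-observe-update cycle:

```python
import random

random.seed(0)
slope = 0.0   # the system's current "knowledge" -- it starts knowing nothing
lr = 0.01     # how strongly each new observation nudges the model

for step in range(5000):
    x = random.uniform(-1, 1)   # new data arrives...
    truth = 3.0 * x             # ...along with the observed outcome
    prediction = slope * x      # the system predicts...
    error = prediction - truth  # ...compares itself to reality...
    slope -= lr * error * x     # ...and updates; no human re-programs anything

print(round(slope, 2))  # the learned slope lands close to 3.0
```

Every pass through the loop shrinks the error a little more, which is the "improve endlessly" Lange describes.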

The potential of AI to revolutionize the way business and society operate is as scary as it is exciting. A 2013 Oxford University study ominously predicted that nearly half of existing jobs could be computerized within twenty years. While automation is certainly nothing new, white collar workers have observed past incarnations with a detached remove, generally viewing human obsolescence as an occupational hazard of manual laborers displaced by machines that could move more widgets per hour down an assembly line. With its ability to perform in-depth analytical tasks at superhuman speeds, AI looms like a restless storm cloud directly over the cushy cubicles of the Dockers and button-up class: analysts, accountants, customer service reps, paralegals, and clinical researchers, to name a few. Even anesthesiologists appear vulnerable with the recent FDA approval of Johnson & Johnson’s Sedasys system, which automatically doses and delivers anesthesia for a variety of standard medical procedures. A single doctor can oversee multiple machines simultaneously, potentially minimizing the number of anesthesiologists many hospitals and surgical centers would need to keep on staff.

But despite the foreboding possibilities, the scope of early AI implementations indicates it might be premature to add cubicle jockeys to the corporate endangered species list just yet. A recent survey of over 800 businesses across four continents from Tata Consultancy Services found that AI is being implemented primarily to improve computer-to-computer activities rather than to automate human tasks. The most common usage of AI among respondents was the detection and containment of cyber threats.

“We doubt AI is automating the jobs of IT security people out of existence,” Tata’s Satya Ramaswamy explains in the Harvard Business Review. “In fact, we find it is helping severely overloaded IT professionals deal with geometrically increasing hacking attempts.” The growing use of AI to automatically flag impending hacks could free up IT security teams to focus on long-term strategy, better preparing their companies to meet the threats of the future.

Likewise, companies implementing AI into customer success environments are generally doing so to free up human representatives to provide a higher level of individualized service, with rote tasks being off-loaded to machines. “Today AI has become better than humans at the things we humans hate to do,” Digital Genius founder Dan Patterson told TechRepublic. “The repetitive tasks, for example in contact centers, like tagging, and classifying, and routing and searching for repetitive answers usually take a human anywhere from 40 to 60 percent of their time. That’s time that can be unlocked to focus on actually helping the customer, and providing a more genuine customer service experience.”

For other functions, AI remains a work in progress. Platforms driven by public interaction remove human programming at their own peril. Voice systems like Siri and Alexa are not true AI. All of their responses amount to little more than mellifluously voiced playback of human-penned scripts, and will likely remain that way for the foreseeable future. Given that human-facing AI systems, by definition, learn from each human interaction, they can only get as “smart” as the humans they are engaging with. And given the prevailing tenor of many digital interactions, the potential to breed a virulently anti-social AI bot is far too high for the brand management teams at corporations like Apple and Amazon to stomach.

To test the “conversational understanding” of its AI technologies, Microsoft introduced Tay, an experimental chatbot on Twitter. “The more you chat with Tay the smarter it gets, learning to engage people through casual and playful conversation,” the company announced. Microsoft soon learned what every Twitter newbie discovers in short order: one person’s "playful conversation" is another’s hate speech. Quickly realizing that Tay would parrot comments from others, mischievous Tweeters inundated the innocent bot with all manner of profanity, racial epithets, and misogynistic invective.

Within hours of logging on, Tay had gone from exulting, “humans are super cool” to proclaiming unprompted, “ricky gervais learned totalitarianism from adolph hitler, the inventor of atheism.” Tay’s “intelligence” is impressive in that it synthesized “information” served up in several different Tweets to formulate its hot take about the star of the BBC’s The Office. Yet for all its intelligence, Tay was ultimately turned into a blithering idiot by an abundance of interactions with misinformed or wickedly subversive Tweeters, as absolutely none of the statements in the Gervais Tweet bear any resemblance to truth, and could possibly have constituted libel had the comedian felt litigious. Needless to say, we shouldn’t expect Siri to be taken off script any time soon.

While there are numerous kinks to be worked out before artificial intelligence is ready to be deployed universally as an out-of-the-box analytics solution, the possibilities it presents are undeniable. Since the transition to a largely white collar economy, workers have spent untold hours slogging through repetitive computerized tasks, functioning, in effect, as machines. AI offers the possibility of an innovation boom, as smart systems enable computers to assume control of the functions they were designed for, freeing up human workers to work as humans - collaborating, innovating, and engaging with customers and coworkers. Look for 2018 to be the year that cognitive computers party like it’s 2001.

In this series, professionals predict the ideas and trends that will shape 2018. Read the posts here, then write your own (use #BigIdeas2018 in your piece).

About the Author

Jeffrey Harvey is a writer, content strategist, and narrative maven based in Washington, DC. As the writer of the Media Matters blog on LinkedIn, he offers inspired analysis, commentary, and brain droppings on all things communications, contemporary culture, and life in the digital age. He is always eager to connect with other content strategists, communications pros, and dynamic minds from all realms. He periodically lapses into describing himself in the third person, and for that he apologizes profusely.
