What AlphaGo’s amazing victory doesn’t tell you about AI in 2016

AlphaGo’s recent defeat of champion Go player Lee Se-dol has spurred another flurry of articles about AI. Among these was an announcement that Luc Besson will be directing a pilot for a new TV series titled Artificial Intelligence. Bet you can’t guess the plot? Well, actually, you probably can: AI escapes the lab, goes AWOL, and mayhem ensues. Its creators form a team of special agents to combat the now-rogue AI. Sounds like Sci Fi at its formulaic best: advanced technology runs amok and is eventually defeated by a ragtag team with good hearts and ingenious hacks.

Fun as it might be to watch, the show is likely to compound current misunderstandings of AI.

Thankfully, a few recent articles have begun to clarify the picture. The New York Times reporter Steve Lohr reminded us that advances in AI proceed in incremental steps rather than dramatic leaps. In TechRepublic, Hope Reese pointed out that AI is not necessarily synonymous with automation, nor does it always (or even often) come in robot form.

And yet, it will take many more clear-eyed articles to counter the TV and movie version of Artificial Intelligence because AI, today, is seamlessly woven into our daily lives. Google relies on AI to power its search results; Facebook relies on AI to find friends in your photos; and, of course, AI gives Siri some of her powers. It’s everywhere and nowhere. As a result, we often don’t realize when AI is shaping, and enhancing, our experience.

Autonomous intelligent agents should begin to change the broader perception of AI. As the technology matures, I anticipate that user demand will shift from a model in which software assists us in completing a given task towards one in which agents do a job in full. Rather than demand that you interact with an app, these agents let you hand a job over completely, and go do other things. Such agents can take many forms. The Google Self-Driving Car Project is one. Our scheduling agent, Amy Ingram, is another. Once such agents are pervasive, users will have a much more concrete sense of AI in action.

Right now, though, there is a very good, if banal, reason that these sorts of autonomous AI agents, ones that can do even a simple job in full, are still rare. They are very, very hard to build. You need to teach a piece of software to work alongside humans, with all the nuance and comprehension of context that humans possess.

Today, the relatively small set of companies building autonomous agents use machine learning in one way or another, and many rely on Supervised Learning. This process requires several core elements: You need training data. You need humans to label training data. You need to model the universe in which your agent will operate. And you need a way to capture and process a massive amount of data to validate and refine your models (algorithms).
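To make those elements concrete, here is a minimal sketch of a supervised learning loop in Python using scikit-learn. The task, the tiny dataset, and the labels are invented purely for illustration; this is not our actual pipeline, just the shape of one.

```python
# Minimal supervised-learning sketch (illustrative only, not x.ai's pipeline).
# Each sentence has been hand-labeled by a human annotator.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Training data plus human labels (here: does the sentence propose a meeting time?)
sentences = [
    "Can we meet Tuesday at 2pm?",
    "Thanks for the update on the contract.",
    "How about next Friday morning instead?",
    "Attached is the revised deck.",
]
labels = [1, 0, 1, 0]  # 1 = contains a time proposal, 0 = does not

# 2. Model the universe: turn raw text into features the model can use.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(sentences)

# 3. Fit, then validate on held-out data to refine the model.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, random_state=0, stratify=labels
)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice the training set is not four sentences but millions of labeled examples, and the validation step is what drives the endless refinement of the models.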

For the Self-Driving Car, Google needed to create a simplified model of the real world in which cars operate (other vehicles, bicycles, pedestrians, pedestrians who are officers of the law, road conditions, signs and their meanings, etc.) and, on top of that, apply all of the elements of driving (accelerate, decelerate, stop, turn left, turn right, back up).
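A toy version of such a world model might look like the sketch below. The names and the decision rule are hypothetical and wildly simplified (this is not Google's representation), but they show the shape of the problem: enumerate what can exist on the road, enumerate what the car can do, and map one to the other.

```python
# Toy world model for a driving agent (illustrative; names are hypothetical).
from dataclasses import dataclass
from enum import Enum, auto

class EntityType(Enum):
    VEHICLE = auto()
    BICYCLE = auto()
    PEDESTRIAN = auto()
    POLICE_OFFICER = auto()   # a pedestrian who may be directing traffic
    TRAFFIC_SIGN = auto()

class Action(Enum):
    ACCELERATE = auto()
    DECELERATE = auto()
    STOP = auto()
    TURN_LEFT = auto()
    TURN_RIGHT = auto()
    BACK_UP = auto()

@dataclass
class Entity:
    kind: EntityType
    position: tuple[float, float]   # metres, relative to the car
    velocity: tuple[float, float]   # metres per second

@dataclass
class WorldState:
    entities: list[Entity]
    road_condition: str             # e.g. "dry", "wet", "icy"

def choose_action(state: WorldState) -> Action:
    """Grossly simplified policy: stop if anything is close, else proceed."""
    for e in state.entities:
        if abs(e.position[0]) < 10 and abs(e.position[1]) < 3:
            return Action.STOP
    return Action.ACCELERATE
```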

Here at x.ai, we’ve had to model all the elements of a meeting (time, date, participants, location). And lest you think that is an easy task, it took us over a year to find the perfect conceptual model for time alone, and the accompanying data annotation guidelines run 16 pages long.
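To give a flavor of what "modeling the elements of a meeting" means in practice, here is a deliberately stripped-down sketch. The field names are hypothetical, and the real model, especially for time, is far richer than a pair of datetimes; that gap is exactly why it took a year to get right.

```python
# Sketch of a meeting model (hypothetical; not x.ai's actual conceptual model).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class TimeConstraint:
    # "Time" in email is rarely a single timestamp; it is often a fuzzy
    # window ("early next week", "after 3pm Thursday").
    earliest: Optional[datetime] = None
    latest: Optional[datetime] = None
    source_text: str = ""    # the phrase the constraint was extracted from

@dataclass
class Meeting:
    participants: list[str] = field(default_factory=list)   # email addresses
    location: Optional[str] = None        # "their office", a Starbucks, a dial-in number
    duration_minutes: int = 30
    time_constraints: list[TimeConstraint] = field(default_factory=list)
```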

We may not be geniuses, but we are reasonably smart people over here (Chief Data Scientist Marcos Jimenez was part of the Higgs boson hunting group at CERN). What makes AI hard to build and concepts like time hard to model is that humans do not typically behave in machine-legible ways. We are imprecise communicators even when we think we are being clear, in part because we can rely heavily on context. Annotating and modeling all those subtleties is a laborious and time-consuming task that requires a good deal of experimentation. (For example, we learned that prepositions that directly modify a time expression are very important if they precede it, but are pretty much irrelevant if they follow it.)
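The preposition example can be made concrete with a toy feature extractor. The rule, the word list, and the regular expression below are illustrative only, not our production annotation logic; they simply show why the preceding word matters ("by Tuesday", "on Tuesday", and "after Tuesday" mean very different things) while a trailing preposition usually does not.

```python
# Illustrative feature rule: record the preposition immediately *before* a
# time expression; ignore anything that comes after it.
import re

PREPOSITIONS = {"on", "at", "by", "before", "after", "until", "around"}
TIME_EXPRESSION = re.compile(
    r"\b(monday|tuesday|wednesday|thursday|friday|\d{1,2}(:\d{2})?\s?(am|pm))\b", re.I
)

def time_features(sentence: str) -> list[dict]:
    tokens = sentence.lower().split()
    features = []
    for i, tok in enumerate(tokens):
        if TIME_EXPRESSION.search(tok):
            prev = tokens[i - 1] if i > 0 else ""
            features.append({
                "time_token": tok,
                # Preceding preposition: highly informative for interpretation.
                "preceding_preposition": prev if prev in PREPOSITIONS else None,
            })
    return features

print(time_features("Could we finish by Tuesday at 2pm?"))
```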

Then too, in creating the models, you immediately confront a “chicken and egg” problem. You need at least some data upon which to build decent models. This means you need to collect data as you are building the models. But how? For some products, you can use an existing data set; however, this isn’t always, or even often, the case.

We needed to create a dataset from scratch, and so we quickly had to build the data-collecting machinery itself. These systems are temporary but essential. Imagine you are building the New York subway system at the turn of the 20th century. You need to dig a bunch of massive tunnels. And to do this, you first need to custom-build the drill, since no such system has yet been built at scale. In 1900, you can’t buy an “off-the-shelf” subway tunnel drill.

In the case of AI that relies on Supervised Learning, you need to develop the collection mechanism as well as the software used for labeling and verifying data. Google’s Self-Driving Car Project mounted purpose-built sensors (a combination of lasers, radars, and cameras) on an ordinary car. To capture and process the millions of scheduling-related emails required to train Amy, we’ve specially designed an email annotation console.
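The output of such a console is, in essence, a stream of labeled records. A hypothetical example (the field names are invented for illustration, not our actual schema) might look like this:

```python
# Sketch of the kind of record a labeling tool might emit for one email
# (field names are hypothetical, not x.ai's actual schema).
annotation = {
    "email_id": "msg-00123",
    "annotator": "human-07",
    "labels": [
        {"span": "next Tuesday afternoon", "type": "TIME", "start": 42, "end": 64},
        {"span": "our office on 5th Ave", "type": "LOCATION", "start": 71, "end": 92},
    ],
    "verified_by": "human-12",   # second pass for quality control
}
```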

And then, once you have the mechanism to capture and annotate data, you need to collect a huge amount of it to fully train the system. But how much? That depends somewhat on the level of accuracy you require. To schedule meetings, we need a very high level of accuracy; otherwise you end up at one Starbucks at 2PM, and your client ends up at another one, three blocks away. For us, this requires millions of scheduling emails. As you can imagine, Google’s Self-Driving Car has even less room for error, which is in part why it has taken the project more than six years and over 2 million miles to amass enough data to deploy the car in test settings like Mountain View, California, and Austin, Texas.

Finally, once you get the system working, you must automate every element of it. Once we labeled a sufficient volume of data, we programmed the machine to take over this specific task. We have done the same thing with, for example, complex email threading: we manually threaded emails that defied the usual conversational logic, developed a threading algorithm based on that work, and then turned the task over to the machine. This took us more than a year from the first piece of data we collected to putting an active, intelligent threader in place, and we are still honing it.
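For a sense of what an automated threader has to contend with, here is a sketch of the kind of baseline heuristic one might start from, using standard email headers with a subject-line fallback. This is not our algorithm; our production threader exists precisely to handle the conversations that defy this kind of logic.

```python
# Sketch of a threading fallback (illustrative, not x.ai's algorithm):
# use the standard In-Reply-To / References headers when present,
# otherwise group by a normalized subject line.
import re
from email.message import EmailMessage

def thread_key(msg: EmailMessage) -> str:
    refs = msg.get("References", "") or msg.get("In-Reply-To", "")
    if refs:
        # The first Message-ID in References identifies the root of the thread.
        return refs.split()[0]
    # Fallback: strip "Re:" / "Fwd:" prefixes and use the subject itself.
    subject = msg.get("Subject", "")
    return re.sub(r"^\s*((re|fwd?)\s*:\s*)+", "", subject, flags=re.I).strip().lower()
```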

When Artificial Intelligence, the TV show, arrives on a screen near you, remember this: it has taken us two years to build the AI behind Amy—and we are still perfecting her (and her brother Andrew). Today, Amy can schedule meetings pretty well, but she can’t get you coffee or join your call. There’s no doubt AI has the potential to transform the way we work and how we are entertained. But AI that is able to override the objectives it was designed for remains a Sci Fi fantasy.

George Wamae

Dynamic Sales Leader | Expert in Operational Leadership, Strategic Planning & Data-Driven Decision Making | Proven Record in Revenue Growth & Team Management

8y

Great article, that clarifies a lot. I like it already... can't wait to try it...

Mike Tobias

Student of Business, Philosopher at Heart, and Entrepreneurial Adventurer

8y

Great explanation. Good concrete examples.

Omar Sharif

Author, Social worker & Magician in Bangladesh 🇧🇩 CEO: Magic Event & Magic Corner, Executive Director: Socio-Economic & Cultural Organization (SECO), Active Member: International Brotherhood of Magicians, Ring-279, USA

8y

So sweet!!! Wish you all the best.............

Fredrik Olsson

PhD. Tech Lead @ All Ears.

8y

Dennis, do you employ any unsupervised or active learning techniques when labelling data?

