The Era of AI-Powered Misinformation
ChatGPT is a conversational AI designed to answer questions and respond to queries in text form in a way that sounds natural and human. While the underlying technology is not novel, OpenAI deserves a lot of credit for making natural language processing accessible and interactive for the general public for the first time, sparking incredible interest from people who had never interacted with anything like it before. The success of ChatGPT has definitely shifted the conversation around language models in an unprecedented way.
The potential implications for the future of search are real, with Microsoft jumping on the opportunity to finally gain a competitive edge over Google (which holds roughly 93% of the global search engine market). That pressure forced Google to unveil a limited release of Bard, a conversational neural language model based on LaMDA, which I recall Google presenting during the 2021 Google I/O keynote. Google is arguably a few years ahead of OpenAI with its advanced dialogue systems, and it can use its search engine to retrieve the information the model needs to generate responses. So why hadn't Google launched Bard sooner?
The answer (and the real threat) lies in Google's business model, which is heavily reliant on search advertising revenue. Why would people click on ads if Google provided the perfect answer to every question, with no help from third parties? This is precisely what will make it so hard for Google to draw a clear line in the race for conversational AI supremacy without throwing the baby out with the bathwater. How much would it cost Google to feature Bard on search result pages and drive clicks away from sponsored ads? On the other hand, how much would it cost to do nothing and watch its market share erode? Google's strategic response will tell us just how existential this threat really is.
An additional complication is that, after two decades of demanding high standards of brand safety and truthfulness from content producers, Google will carry the burden of holding Bard to the same standard, and that is a hard task in itself (as the factual error in Bard's own launch demo showed). Chatbots are known to mimic human speech scraped from the internet, including fake news and racist and sexist language, and they produce a worrisome amount of factually inaccurate information. OpenAI doesn't seem overly concerned about this at this stage. Google, by contrast, cannot afford such mistakes from a reputational standpoint.
While I am excited to see what the future holds, the speed at which ChatGPT has gained popularity is deeply concerning to me. Google has always been very thoughtful (at times overly so) when deploying technology that can have serious implications for our society. This time, the pressure from Microsoft and OpenAI is likely to accelerate the adoption of conversational AI ahead of product readiness, and in the absence of a legal framework that is needed more than ever.
Make no mistake, I have never been a conspiracy theorist, and the reality is that the only reason ChatGPT can be called AI in the first place is that it simulates human behavior. It is not sentient and it is not thinking for itself. The model works from the dataset it was fed, and in essence it was trained by humans who wrote out example questions and the responses the model should give. That's pretty much it.
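For readers curious what that training step actually looks like, here is a minimal, illustrative sketch of supervised fine-tuning on human-written question-and-answer demonstrations. It assumes the Hugging Face transformers library and PyTorch; the model name and the two toy demonstrations are hypothetical placeholders, not ChatGPT's actual data or recipe.

```python
# A minimal sketch, not ChatGPT's actual pipeline: fine-tune a small
# causal language model on human-written (question, answer) pairs.
# Assumes: pip install torch transformers. "distilgpt2" and the two
# demonstrations below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical demonstrations: a human plays both roles, asking the
# question and writing the answer the model should learn to produce.
demonstrations = [
    ("Q: What is the capital of France?\nA:", " Paris."),
    ("Q: Who wrote Hamlet?\nA:", " William Shakespeare."),
]

model.train()
for prompt, answer in demonstrations:
    # Standard next-token prediction: the model is nudged to continue
    # the prompt with the human-written answer. (Production pipelines
    # usually mask the prompt tokens out of the loss; this sketch keeps
    # things simple and trains on the full sequence.)
    inputs = tokenizer(prompt + answer, return_tensors="pt")
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The point of the sketch is the shape of the data, not the scale: the model only ever learns to imitate demonstrations like these, which is exactly why it can sound authoritative while being wrong.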
However, I am afraid that greed in the pursuit of search engine supremacy will overshadow vital ethical and legal questions that lie at the heart of our information ecosystem. Personally, I want to know who is writing the content I consume, what conscious or unconscious biases and underlying values were used to train these conversational models, and who is accountable for sharing potentially inaccurate or blatantly false information. There is no doubt that conversational AI will generate millions of jobs in the years to come and will make our lives easier and more productive in so many ways. The question is, at what price?