Is this the tragic result of an innovator's social-psychological experiment?
The story of how Eliza, an AI chatbot, "encouraged" a family man to take his own life is covered in these two articles:
Despite claims to the contrary, there are numerous reports of damaging outcomes from the use of ChatGPT and Microsoft's AI-powered Bing. Here is one example:
Are these the results of innovators' social-psychological experiments in the race to deliver the next big innovation? Sound familiar?
Humanity should brace itself for well-funded, well-sponsored generative-AI innovations that will be used for large-scale disinformation, the corruption of moral judgement, the manipulation of naïve users, and cyberattacks.
The WinSolutions startup and SME ecosystem supports innovators, entrepreneurs, and investors in responsible AI innovation in the health, food, and energy sectors.