Dr. Hinton quits Google - the dangers of AI
Whenever technology has threatened human livelihoods (automation), there has been some counterbalance to ensure a slow transition. Unfortunately for AI, this has not happened, and I think Hinton quitting Google is a watershed moment in the history of AI. We need to wake up and listen.
Prominent scientists like Leo Szilard, Joseph Rotblat, James Franck and even Niels Bohr opposed the atomic bomb on moral grounds; Rotblat even left the Manhattan Project over it. They clearly understood the dangers of the bomb and did not want to be part of it.
Now we have the first big name opposing the trend of building AI systems without understanding the implications of releasing these models to the public.
The key difference between the Manhattan Project (the atomic bomb) and these AI systems is that, in the first case, everyone knew what they were building and had a good idea of the implications. Now, most people in AI have not thought through the issues, yet we are releasing these models anyway.
Comment (LinkedIn Top Artificial Intelligence (AI) Voice | Public Speaker | CTO AI, Head of Capgemini AI Lab | Vice President):
Agreed that this is a serious, albeit not the first, great name trying to convey the need to "consider the consequences". Weizenbaum, Hawkins and others have already spoken out publicly on the matter, without much impact (yet?). Luckily, it is not completely silent this time, with governments and other bodies reacting. It does not seem an easy task to pin down what the problem actually is, and opinions range from strongly against to "I do not know what you are afraid of". But if one of the creators raises his voice, it is worth listening to at the very least.