Dr. Hinton quit Google - dangers of AI

Whenever a technology has threatened human livelihoods (as with automation), some counter-balance has existed to slow the transition. Unfortunately for AI, this has not happened, and I think Hinton quitting Google is a watershed moment in the history of AI. We need to wake up and listen.

Prominent scientists like Leo Szilard, Joseph Rotblat, James Franck and even Niels Bohr objected to the atomic bomb on moral grounds; Rotblat went so far as to leave the Manhattan Project. They clearly understood the dangers of the bomb and did not want to be part of unleashing them.

Now we have the first big name opposing the trend of building AI systems without understanding the implications of releasing these models to the public.

The key difference between the Manhattan Project (the atomic bomb) and today's AI systems is that, in the first case, the scientists knew what they were building and had a good idea of its implications. Now, most people in AI have not thought through the issues, and yet we keep releasing these models.

Robert (Dr Bob) Engels

LinkedIn Top Artificial Intelligence (AI) Voice | Public speaker | CTO AI ♠️ Head of Capgemini AI Lab | Vice President

Agree that this is a serious, albeit not the first, great name trying to convey the need to "consider the consequences". Weizenbaum, Hawkins and others have already been out in the public space on this matter, without much impact (yet?). And luckily it is not completely silent this time, with governments and other international bodies reacting. It does not seem an easy task (what exactly is the problem?), and opinions range from heavily against to "I do not know what you are afraid of". But if one of the creators raises his voice, it might be worth listening, at the least.

