Geoff Hinton: AI Nobel laureate 2024

Geoffrey Hinton, together with his colleague John Hopfield, has won what is only the second Nobel Prize awarded for work in Artificial Intelligence. The earlier example was Herbert Simon.

Although I never met Geoffrey Hinton, our paths ran in very similar directions. We both started out in the 1970s studying how vision works, and both our journeys took us from Cambridge to Edinburgh.

Hinton worked on recognising things in images using relaxation labelling, while my own research studied how we are able to see in 3D. From today’s perspective it is hard to appreciate how technically difficult it was simply to get images into a computer in the first place, let alone to have access to sufficient computing power to do anything with them. Doing anything useful was a hard slog on such slow machines.

Hinton moved on and, through his involvement with experimental psychologists, became increasingly interested in how biological bundles of networked ‘wires’ in brains were able to adapt and learn from examples perceived through our senses.

While Hinton was thinking about parallel networks of neurons, back in Scotland my colleagues and I were also working on machine learning. Both of us were statistically Bayesian, but we used very different approaches to solving problems.

Throughout the 1980s the largest group in the world working on machine learning was at the Turing Institute in Glasgow, where around 100 researchers were focused on an approach called rule induction. Our work was driven by a commitment to transparency, so that the computers could explain what they did and why. That helped us win contracts in areas where those concerned with health and safety demanded to understand why the machines behaved the way they did: the Space Shuttle auto-lander, satellite controllers, military software and systems in nuclear power stations all required transparency. We were a magnet for anyone who needed a justification of why things worked the way they did. We built the systems and they worked, but the effort involved was enormous and time-consuming. Meanwhile, Hinton’s neural networks simply operated as a black box. For him, all that mattered at the time was that they worked and that they were quick and efficient. It turned out that being simple and computationally effective was the key; Hinton was right and we were wrong. Occam’s Razor won out.
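To make that contrast concrete, here is a minimal illustrative sketch in Python (using scikit-learn, and nothing to do with the Turing Institute's actual systems): a rule-induction approach produces a tree that can be printed as human-readable if/then rules, while a small neural network answers the same question but offers only a prediction and a matrix of weights.

```python
# Illustrative only: transparent rule induction vs a black-box neural network.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

data = load_iris()
X, y = data.data, data.target

# Rule induction: the fitted tree can be printed as explicit if/then rules,
# so the system can explain *why* it made a particular classification.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# Neural network: it classifies just as well, but all it can offer by way of
# explanation is a prediction and its internal weights.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(net.predict(X[:1]))    # a prediction...
print(net.coefs_[0].shape)   # ...and a weight matrix, nothing more
```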

When ChatGPT burst onto the tech scene in November 2022, even those of us who had worked in the area all our lives were taken aback by the startlingly brilliant performance of such a remarkably ‘dumb’ programme backed by a tsunami of raw compute horsepower. It took a while to appreciate that what was smart wasn’t the machine but the environment of examples that fed the machine. This was Geoff Hinton’s single most powerful insight. It was the seed that spawned the massive industry that AI has become, and it is why Geoff Hinton is such a worthy winner of the Nobel Prize.

I do find it intriguing that back in the 1980s we were obsessed with the need for machine-learned systems to explain and justify their behaviour, while Hinton was simply focused on brute performance. Today we appear to have switched sides. My own company uses AI in every conceivable way to improve the performance of ecommerce (now the largest commercial application area of digital, where all that matters is raw performance), while Geoff Hinton is busy worrying about the lack of transparency in what he views as the monster he has helped create.

Dr Peter Mowforth, CEO, INDEZ
