Meta's Search for AI

This recent position paper by Meta's Chief AI Scientist, Yann LeCun, seems to take our ideas on the theory of AI back to pre-Cartesian times, with the search for ever smaller particles of intelligence and the quest for the homunculus at the root of understanding. It seems to me that, once again, the engineers have missed the point of AI.

I accept that, at some level, intelligence, and emotions such as love and joy and anger, and indeed the whole of existence, are all subject to the laws of maths and physics. However, I don't believe it is either novel or helpful to describe intelligence in these terms. I am concerned that the next logical step for LeCun and colleagues might be to declare that intelligence is a quantum phenomenon, and that quantum computing will simply produce AI as an emergent property. I don't believe this.

In my opinion, intelligence has to be something bigger, embracing beliefs, goals, plans, and attitudes, all of which can be modified in various ways as the result of new information or experience. None of this is captured by Meta's position paper, and indeed some of the statements in that paper seem to be taking us in entirely the wrong direction.

On page 5, for example, LeCun asks the question "can a human or animal brain contain all the world models that are necessary for survival?" It is difficult to imagine how our civilization could have reached its present state if the answer were not "Yes". LeCun seems almost to be arguing that intelligence does not in fact exist and that all behaviour is merely stimulus-response and random chance - a valid philosophical view, but not one which is likely to advance the development of AI.

On pages 17-18, regarding the development of machine vision (one of the simpler problems for AI to solve, and one where a great deal of essentially rule-based manipulation of the training data seems to be accepted without question), LeCun optimistically suggests that in his approach "Once the notion of depth has been learned, it would become simple for the system to identify occlusion edges, as well as the collective motion of regions belonging to a rigid object. An implicit representation of 3D objects may spontaneously emerge." Or not, presumably - which would be embarrassing and inconvenient, and would leave the AI blindly groping to understand the world: a situation in which many humans find themselves, and with which our natural intelligence copes remarkably well.

On page 46, in his conclusions, LeCun falls prey to the assumptions of the engineer and demonstrates why this approach is as flawed as current ML and connectionist attempts to produce AI. He states without argument or evidence that "Large Language Models (LLMs), and more generally, large-scale transformer architectures trained with a form of generative self-supervised learning, have been astonishingly successful at capturing knowledge present in text." No. No they haven't. This claim cannot be refuted often enough or loudly enough. There is no "knowledge" captured by these LLMs and NNs and MLPs and whatever other implementations of machine learning are currently available.

Let me briefly explain why I say this, and what my argument is. Stating that LLMs have captured knowledge, and pointing to their behaviour as evidence, is no different from stating that God, or fate, or I myself by magic, or a race of space aliens in orbit around our planet, saved the lives or caused the deaths of the survivors or the casualties in some natural disaster. Yes, it is true that some people died and some people survived, just as it is true that an LLM produces some text and not other text in any given situation - but this is just a statistical phenomenon, not evidence of deliberate agency. Claiming that LLMs are "capturing knowledge" takes us right back to Searle and his Chinese Room: it is clear that the agent in the Chinese Room has no understanding of the messages they translate, and derives no "knowledge" of Chinese, but is simply following a procedure. There is a huge difference, to my mind, between the "knowledge" and "intelligence" in Searle's Chinese Room example, even if you take the entire room as a system akin to an LLM, and the intelligence and knowledge of a human translator. Put simply, for any statement translated by a human translator, we could ask them whether they agree with the statement and receive an intelligent answer.
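To make the Chinese Room point concrete, here is a minimal sketch in Python - with an obviously toy and entirely hypothetical rule book, not anything from Searle's paper or LeCun's - of a "translator" that does nothing but look up symbols and emit whatever the rules dictate:

    # A toy, hypothetical rule book: input symbols mapped to output symbols.
    # Nothing in this mechanism represents what any symbol means.
    RULES = {
        "你好": "hello",
        "谢谢": "thank you",
        "天空是蓝色的": "the sky is blue",
    }

    def chinese_room(message: str) -> str:
        # Follow the procedure: look up the symbols and emit what the
        # rules dictate, or a blank squiggle for anything not in the book.
        return RULES.get(message, "?")

    print(chinese_room("天空是蓝色的"))  # -> "the sky is blue"
    print(chinese_room("你同意吗？"))    # -> "?" ("Do you agree?" is not in the rules)

The procedure produces correct-looking output for any input its rules cover, yet ask it whether it agrees with anything it has "translated" and it has no answer to give - which is exactly the gap between rule-following and understanding.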

To conclude, LeCun has done a very good job of drawing together the threads of current ML, and of proposing a near-optimal way of using these to advance the applications of ML, but he has not shown any way to implement understanding, intelligence, or true AI. This approach may be able to extend the impressive simulation of intelligence shown by LLMs and other ML applications into a model which is better able to interact with the real world, to train itself, and to react to more complex stimuli (or not, as I point out above). As LeCun himself acknowledges, this is not a radical new direction. As he does not acknowledge, it does not promise a step change in AI - that would require some level of actual understanding, perhaps even of sentience or consciousness, which we are no nearer to defining. Or maybe it's all just quantum.
