Machine Intelligence is not Artificial, Part 2
Drawings of Santiago Ramón y Cajal at the Museo Nacional de Ciencias Naturales in Spain

(This essay is Part 2 of an ongoing series, Machine Intelligence is not Artificial (MIINA) - MIINA Part 1: (Four Funerals & a Divorce), MIINA Part 2: (Allen Newell's Dissertation), MIINA Part 3: In the Beginning, MIINA Part 4: Cybernetics & Norbert Wiener, MIINA Part 5: The Ratio Club & British Cybernetics, MIINA Part 6: Dartmouth 1956, The Birth of AI and the Balkanization of Machine Intelligence, MIINA Part 7: Interlude)

Artificial intelligence (AI) is only a portion of what is possible with machine intelligence, and it is through a realignment of the neural / cognitive sciences and computer engineering that we will achieve "general" machine intelligence - i.e. Machines Who Think.

I gave an overview of this hypothesis on machine intelligence in the first part of "Machine Intelligence is not Artificial." Here, I would like to take a deeper dive into some of the historical context behind my thinking, specifically the doctoral thesis of AI pioneer Allen Newell, "Information Processing: A New Technique for the Behavioral Sciences" (1957), completed at the Carnegie Institute of Technology (now Carnegie Mellon University (CMU)). His was the first doctorate granted for work on AI, a term coined for the 1956 Dartmouth meeting, and probably the first to focus on any sort of machine intelligence.

Allen Newell and his sideburns

Newell and his advisor, the future Nobel laureate Herb Simon, were the only participants at that first AI conference at Dartmouth to show up with a working AI program - Logic Theorist, created with Cliff Shaw, who worked with Newell at RAND. Logic Theorist was able to prove 38 of the first 52 theorems from Whitehead and Russell's Principia Mathematica. It stole the show at an otherwise uneventful first AI conference and got separate billing when the conference results were reported at MIT in the fall of 1956 (Simon tells an interesting tale of wrangling with John McCarthy, who coined the term AI and was always the showman, to keep him from stealing their Logic Theorist thunder).

Newell, Simon and Shaw (or NSS, as they were known in the decades that followed, when they dominated the early AI scene) were already working on AI when the conference was announced in 1955; according to Simon, they just called it operations research. And they went on to continue that work for decades, combining engineering with innovations in cognitive psychology and business administration - what they often described overall as human problem solving. Simon and Newell went on to win the Turing Award in 1975, and Simon the Nobel in Economics in 1978 for his "pioneering research into the decision-making process...". They were always focused not simply on the engineering of machine intelligence, but also on understanding the cognitive aspects of human intelligence as a model for complex information processing and for approaching reasoning in computers. It is no coincidence that they did not embrace the AI term as completely as some of their colleagues.

Newell culminated his life's work on human and machine intelligence in the cognitive architecture called Soar and his accompanying book on the topic, Unified Theories of Cognition, which still stands as one of the better explorations of cognition more than 30 years later. The ideas in the book, and the very concept of "unification," seem to bookend the opening chapter of his thesis from 30 years earlier. In "Information Processing: A New Technique for the Behavioral Sciences" he breaks down the "current information-processing research" into five categories:

  1. "...practical workers who deny any connection between their work and a science of human behavior, but who take an engineering approach to some particular task that needs mechanizing." Here he mentions those working on problems like "mechanical translation of languages" and "machine literature searching" as examples.
  2. "...a group concerned with pure artificial intelligence. They too prefer to disclaim any immediate relation to behavioral science, but are working directly to synthesize systems that will show as much of the higher human intellectual functions as possible." Here he mentions Alan Turing, "Computing Machinery and Intelligence" (1950), and Claude Shannon (a Dartmouth meeting organizer), "Programming a Computer for Playing Chess," as two good examples.
  3. "A third, rather diverse, group may fairly be called the cybernetic group. Here the fundamental object is to construct a science of human behavior. The point of departure is physiology..." Here he mentions the Homeostat of W. Ross Ashby and the mechanical turtles of Grey Walter as some of the direct work on automata that belongs to this group, while also noting "the more directly physiological work of [Warren] McCulloch and [Walter] Pitts" as well.
  4. "A small group can be separated from the cybernetics group by its attitudes towards digital computers. These investigators share in common with the cyberneticists a concern with the science of human behavior at the level of physiology and its first behavioral correlates. However, they use the digital computer as an analytical device for discovering the consequences of various theories, formulated as sets of interacting mechanisms." Here he mentions Nathaniel Rochester (another Dartmouth meeting organizer) for his Hebbian models of the nervous system, "Tests on a Cell Assembly Theory of the Action of the Brain, Using a Large Digital Computer," along with Oliver Selfridge and Gerald Dinneen for their "Pattern Recognition and Modern Computers".
  5. "A group which I would call the information processing group...concerned with the science of human behavior, but the point of departure is at the social and cognitive level. Also, the computer is viewed as a consequences-generating device, and not as a model of human behavior." Here he mentions his own group, with Simon and Shaw, as the main active participants.

One of Grey Walter's mechanical turtles circa 1951

For simplicity, let's refer to the groups Newell outlined as 1) engineers, 2) AI engineers, 3) cyberneticists, 4) cyberneticists + digital computer analysts, and 5) the information processing group.

Much has transpired in the field(s) of machine intelligence in the 60+ years since Newell drafted his thesis. Mapping these groups across time could be the work of an entirely separate thesis, but we can trace the broad strokes. The 1) engineers and 2) AI engineers have largely merged while adopting a cartoonishly simple version of the physiology represented in the "neural network" of McCulloch and Pitts (3), which now underlies the impressive fill-in-the-blank feats of current large language models. While the automata work of the 3) cyberneticists has a lineage in today's robust robotics efforts, the more detailed exploration of physiology by groups 3) and 4) has largely been forgotten, except in the relatively small (compared to AI) realm of neuromorphic computing. The neuronal and synaptic complexity that has been known for more than a century (see Cajal's drawings in the header image at the top of this article) has yet to be considered by much of the AI and artificial neural net community; nor have the deeper functional complexity von Neumann warned about, or the analog/digital hybrid nature of the brain whose realization devastated Pitts (see Part 1), really been addressed. Add to this the neurotransmitter and channel complexity, and the genomic and epigenomic variance in their expression in a single neuron across a lifetime, and you get a brain architecture and physiology whose complexity (roughly 10^40 different brain states) has barely been scratched by even the most advanced neuromorphic architectures.
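To see just how "cartoon simple" the McCulloch-Pitts abstraction is, consider a minimal sketch (function names and weights are illustrative, not from any particular implementation): the 1943 model collapses a neuron's dendritic arbors, ion channels, and neurotransmitter dynamics into nothing more than a weighted sum and a step threshold.

```python
# Illustrative sketch of a McCulloch-Pitts-style threshold unit.
# Everything Cajal drew - and everything discovered since - is reduced
# here to a weighted sum compared against a fixed threshold.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A two-input AND gate, one of the logical functions McCulloch and
# Pitts showed such units could compute:
print(mp_neuron([1, 1], [1, 1], threshold=2))  # fires: 1
print(mp_neuron([1, 0], [1, 1], threshold=2))  # silent: 0
```

Networks of units like this, with learned real-valued weights and smoother activation functions, are the ancestors of today's artificial neural nets; the contrast with the physiological detail described above is the point.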

And as for 5), the information processing group of Newell? No robust effort has really taken its place, at least compared to the engineering of groups 1) and 2). Some of the 3) and 4) cyberneticists have moved towards the social and cognitive level Newell and Simon explored, though this work is small compared to its robustness in the 60s and early 70s. The reasons for this shift and consolidation are myriad. I touched upon them briefly in the last part of this series, and will go into more detail in the future, aligned with a podcast series Amicia D. Elliott, Ph.D. and I have been recording (Neuroboros - early episodes soon to be released).

What is missing in today's quest for artificial general intelligence is the cognitive layer that Simon and Newell focused on, along with the above-mentioned neurophysiological and neuroarchitectural complexity. An abundance is known about the latter from decades of research across the branches of neuroscience, but this knowledge is diluted to the point of reductionism in most cases and largely unknown to the AI engineers. Similar gaps exist in our knowledge of the cognitive templates of human intelligence (ask 100 engineers what the intelligence in AI is and get a googol answers, or a Googled answer). The philosophy and psychology of cognition are more scattered than even reductionist, and no consolidated consensus definitions of these things exist (so you can't really blame the engineers).

A googol

We are heavy on artificial and weak on intelligence. To achieve mechanical intelligence (if we achieve real intelligence in a machine, we can probably dispense with the added "general") we may want to spend a bit more time on the cognitive side, align this with the engineering advances, and supercharge the whole thing with deeper neuromorphic modelling.

Part 3 of this series is now available here.

(Thank you to the CMU Archives and staff for making the Newell and Simon collections available, along with helping me explore the Pamela McCorduck archives of her interviews and notes from her 1979 book Machines Who Think. Those files along with the references and additional reading listed in part 1 are providing the foundation for this series. More references will be added as we dive into detail of additional topics.)




