Machine Intelligence is not Artificial, Part 2
(This essay is Part 2 of an ongoing series, Machine Intelligence is not Artificial (MIINA) - MIINA Part 1: (Four Funerals & a Divorce), MIINA Part 2: (Allen Newell's Dissertation), MIINA Part 3: In the Beginning, MIINA Part 4: Cybernetics & Norbert Wiener, MIINA Part 5: The Ratio Club & British Cybernetics, MIINA Part 6: Dartmouth 1956, The Birth of AI and the Balkanization of Machine Intelligence, MIINA Part 7: Interlude)
Artificial intelligence (AI) is only a portion of what is possible with machine intelligence, and it is through a realignment of the neural / cognitive sciences and computer engineering that we will achieve "general" machine intelligence - i.e. Machines Who Think.
I gave an overview of this hypothesis on machine intelligence in the first part of "Machine Intelligence is not Artificial." Here, I would like to take a deeper dive into some of the historic context behind my thinking, specifically the doctoral thesis of AI pioneer Allen Newell, "Information Processing: A New Technique for the Behavioral Sciences," completed in 1957 at the Carnegie Institute of Technology (now Carnegie Mellon University (CMU)). His was the first doctorate granted for work on AI, a term coined for the 1956 Dartmouth meeting, and probably the first to focus on any sort of machine intelligence.
Newell and his advisor, the future Nobel laureate Herb Simon, were the only participants of that first AI conference at Dartmouth to show up with a working AI program: Logic Theorist, created with Cliff Shaw, who worked with Newell at RAND. Logic Theorist was able to prove 38 of the first 52 theorems from Whitehead and Russell's Principia Mathematica. It stole the show at an otherwise uneventful first AI conference and got separate billing when they reported out the results of the conference at MIT in the Fall of 1956 (Simon tells an interesting tale of wrangling with John McCarthy, who coined the term AI and was always the showman, to keep him from stealing their Logic Theorist thunder).
Newell, Simon and Shaw (or NSS, as they were known in the decades that followed, when they dominated the early AI scene) were already working on AI when the conference was announced in 1955; they just called it operations research, according to Simon. And they went on to continue that work for decades, combining engineering with innovations in cognitive psychology and business administration - what they often described overall as human problem solving. Newell and Simon went on to win the Turing Award in 1975, and Simon the Nobel in Economics in 1978 for his "pioneering work into the decision-making process...". They were never focused simply on the engineering of machine intelligence; they also sought to understand the human cognitive components of intelligence as a model for complex information processing and an approach to reasoning in computers. It is no coincidence that they did not embrace the AI term as completely as some of their colleagues.
Newell's life's work on human and machine intelligence culminated in the cognitive architecture called Soar and his accompanying book on the topic, Unified Theories of Cognition, which still stands as one of the better explorations of cognition more than 30 years later. The ideas in the book, and the very concept of "unification," seem to bookend the opening chapter of his thesis from 30 years earlier. In "Information Processing: A New Technique for the Behavioral Sciences," he breaks down the "current information-processing research" into five categories.
For simplicity, let's refer to the groups Newell outlined as 1) engineers, 2) AI engineers, 3) cyberneticists, 4) cyberneticists + digital computer analysts, and 5) the information-processing group.
Much has transpired in the field(s) of machine intelligence in the 60+ years since Newell drafted his thesis. Mapping these groups across time could be the work of an entirely separate thesis, but we can trace the broad strokes. The 1) engineers and 2) AI engineers have largely merged, adopting a cartoonishly simple version of the physiology represented in the "neural network" of McCulloch and Pitts (group 3) that now underlies the impressive fill-in-the-blank feats of current large language models. While the automata work of the 3) cyberneticists traces a lineage to today's robust robotics efforts, the more detailed exploration of physiology by groups 3) and 4) has largely been forgotten, except in the (relative to AI) small realm of neuromorphic computing.

The neuronal and synaptic complexity that has been known for more than a century (see Cajal's drawings in the header image at the top of this article) has yet to be seriously considered by much of the AI and artificial neural net community; nor have the deeper functional complexity von Neumann warned about, and the analog/digital hybrid nature of the brain whose realization devastated Pitts (see Part 1), really been addressed. Add to this the neurotransmitter and channel complexity, and the genomic and epigenomic variance in their expression in a single neuron across a lifetime, and you get a brain architecture and physiology (roughly 10^40 different brain states) whose complexity has barely been scratched by even the most advanced neuromorphic architectures.
And as for 5), Newell's information-processing group? No comparably robust effort has taken its place alongside the 1) and 2) AI engineering. Some of the 3) and 4) cyberneticists have moved toward the social level Newell and Simon explored, though this work is small compared to its robustness in the '60s and early '70s. The reasons for this shift and consolidation are myriad. I've touched upon them briefly in the last part of this series, but will go into more detail in the future, to align with a podcast series Amicia D. Elliott, Ph.D. and I have been recording (Neuroboros - early episodes soon to be released).
What is missing in today's quest for artificial general intelligence is the cognitive layer that Simon and Newell focused on, along with the neurophysiological and neuroarchitectural complexity described above. An abundance is known about the latter from decades of research across the branches of neuroscience, but it is diluted to the point of reductionism in most cases and largely unknown to the AI engineers. Similar gaps exist in our knowledge of the cognitive templates of human intelligence (ask 100 engineers what the "intelligence" in AI is and get a googol of answers, or a Googled answer). The philosophy and psychology of cognition is more scattered still, and no consolidated, consensus definitions of these things exist (so you can't really blame the engineers).
We are heavy on the artificial and weak on the intelligence. To achieve machine intelligence (if we achieve real intelligence in a machine, we can probably dispense with the added "general"), we may want to spend a bit more time on the cognitive side, align it with the engineering advances, and supercharge the whole thing with deeper neuromorphic modelling.
Part 3 of this series is now available here.
(Thank you to the CMU Archives and staff for making the Newell and Simon collections available, along with helping me explore the Pamela McCorduck archives of her interviews and notes from her 1979 book Machines Who Think. Those files along with the references and additional reading listed in part 1 are providing the foundation for this series. More references will be added as we dive into detail of additional topics.)