The Soul of the Machines? - The Current State of Advanced Artificial Intelligence
It is worth introducing this presentation by making reference to the International Society for Philosophers, with which I am affiliated for the purposes of this presentation. The Society was formed in 2002 in association with the Pathways to Philosophy program, and it numbers some 2000 lay and professional philosophers from ninety-three countries. Whilst supported by a board and officers, the driving force of the Society was Dr Geoffrey Klempner, who unfortunately passed away in November last year. I confess that I am not entirely sure of the status of the Society in Dr Klempner's absence, and can only hope that the board is able to take up and continue his project. For my own part, this presentation will have to do as an activity in his memory, following the spirit of philosophical investigation.
This is not the first public presentation that I have given on this subject, although it is the first in several years. In October 2006 I addressed a service of the Melbourne Unitarian Church on the topic "The Age of Spiritual Machines: The Artificial Intelligence Predictions of Ray Kurzweil". Almost five years later, in July 2011, I was granted the opportunity to present at the Humanity+ conference at the University of Melbourne under the topic "More Human Than Human", where I explored the necessary logical pragmatics from intelligence to a moral consciousness. Two months later, in September 2011, at The Philosophy Forum, this subject received further elaboration with a presentation entitled "Machines That Think: From Artificial Intelligence to Artificial Consciousness?", and four years after that, in August 2015, I gave another presentation to the same group, "The Philosophy of Computation and Computers".
It is over seven years since the last public address I have given on this subject. It is interesting to revisit the subject matter discussed in these presentations because, as one would readily acknowledge, seven years is a very long time in computing. It is probably appropriate to mention at this juncture that for the last fifteen years I have been employed in the world of supercomputing as a systems engineer and educator, firstly by the Victorian Partnership for Advanced Computing and then by the University of Melbourne, and have had the opportunity to work on several of the world's most powerful systems, not to mention the opportunity to run small training workshops for thousands of researchers across many disciplines including, of course, those involved in the matter of artificial intelligence.
Between the time I was approached to give this presentation and today there has been a quite remarkable and ongoing uptick of interest in the subject of artificial intelligence, spurred by two main sources, which is, of course, quite serendipitous. Firstly, the public availability of visual artworks generated by artificial intelligence from natural-language descriptions by tools such as DALL-E, built on diffusion-model techniques (as also used by Stable Diffusion) and accelerated by graphics processing units; and secondly, public access to AI chat services like ChatGPT (Chat Generative Pre-trained Transformer), which applies transfer and supervised machine learning to a generative pre-trained transformer language model. Both DALL-E and ChatGPT, I should mention, have been developed by the US research laboratory OpenAI. They are not, of course, the only artificial intelligence tools available (Microsoft's "new Bing" has been released and Google Bard is coming soon), but they have certainly captured the public imagination, as they are seemingly able to engage in text communication with some apparent competence, output rather impressive artworks, and answer questions with some acumen.
The artworks are of sufficient quality that many human visual artists have expressed a great deal of dismay, some at the quality of the work and, more legitimately, at the use of copyrighted works gathered by image web-crawling. Interestingly, the noise from musical artists has been a lot less. All artists, of course, face the challenge that at least half their work is the emotional connection that the receiver feels, rather than just the emotional expression of the producer. As for ChatGPT, whilst often error-prone, it does provide a sufficiently accurate rendition of information with human-like correspondence, famously passing an MBA operations management exam at the University of Pennsylvania's Wharton School and a Google coding interview for a level 3 engineer. Not only has the success of the tool led many teachers to panic about students submitting AI-generated assignments, some researchers have also started listing ChatGPT as a co-author on academic papers. Some rather speculative 'blog posts argue that ChatGPT has passed the famous "Turing Test" which, stated simply, holds that a machine has reached imitative human intelligence if a human cannot determine whether they are communicating with a person or with a machine. In a rather well-formed argument, ChatGPT claims that it does not pass the Turing Test. Perhaps it would say that; others have noted that ChatGPT is quite capable of producing a rather amusing short story of the path a seemingly benevolent artificial intelligence would take if it wanted to take over the world. Are we mere humans being deceived by a very clever robot?
"GPT-3 (Generative Pre-trained Transformer 3) is a language generation model developed by OpenAI that has been trained on a very large dataset and can generate human-like text. It is not designed to be able to pass the Turing test, which is a test used to determine whether a machine is able to exhibit intelligent behavior that is indistinguishable from a human. The Turing test involves having a human judge communicate with two entities, one of which is a human and the other is a machine, without knowing which is which. If the judge is unable to distinguish between the human and the machine, then the machine is said to have passed the Turing test.
GPT-3 is a very powerful language generation model and can produce text that is difficult to distinguish from text written by a human, but it is not designed to be able to engage in conversation or exhibit other behaviors that would be required to pass the Turing test. It is important to note that the Turing test is not a definitive measure of intelligence or consciousness, and there is ongoing debate about the appropriateness and usefulness of the test.
This answer was written by chatGPT"
It is tempting to consider these events as part of what is now a long and increasingly complex relationship between human minds and computer systems, and artificial intelligence in particular. One can refer, for example, to the initial failure of programmatic machine translation of languages in the 1960s, which has since been largely solved by lookup across large bilingual text collections by tools such as Google Translate, which translates, in most cases, from Language 1 to English and then to Language 2. Up to 1970 there was great hope in developing artificial neural networks that could explain mental phenomena; today, however, there has been a revival and reconstruction in the form of machine learning, where systems learn probability-weighted associations by comparing characteristics across an initial collection of human-generated examples. One may also mention in this context the increasing advances in modular theories of the brain for specific mental tasks, with widespread confirmation from neurological analysis of certain types of neurotrauma. Further, as a victim of their own success, expert and rule-based systems, built on conditional branching and first-order predicate calculus, have become so normal that they are rarely thought of as artificial intelligence at all.
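To make the phrase "probability-weighted associations" a little more concrete, the following is a minimal Python sketch, purely illustrative and with invented toy data, of a system nudging numeric weights toward a set of human-labelled examples until its predicted probabilities match the labels:

```python
# A toy illustration (not any production system) of learning by
# probability-weighted association: a two-feature logistic model
# adjusts its weights toward human-labelled examples.
import math

# Hypothetical training data: (feature_1, feature_2) -> label
examples = [((0.9, 0.1), 1), ((0.8, 0.3), 1),
            ((0.2, 0.7), 0), ((0.1, 0.9), 0)]

w1, w2, bias = 0.0, 0.0, 0.0   # start with no associations at all
rate = 0.5                     # learning rate

for _ in range(1000):
    for (x1, x2), label in examples:
        # Weighted association converted to a probability (sigmoid)
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + bias)))
        error = label - p      # how far the weighted guess is off
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

print(f"learned weights: w1={w1:.2f}, w2={w2:.2f}, bias={bias:.2f}")
```

Scaled up from two weights to billions, and from hand-made examples to web-scale corpora, this weight-adjusting loop is essentially the mechanism behind contemporary machine learning.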
So where are we at? How do things compare to the predictions that were made twenty or more years ago? Starting from hardware (and we must do so, because ultimately it is not clouds all the way down), Hans Moravec in 1997 argued that human behaviour would require approximately 100 million MIPS of computer power, one of numerous attempts to calculate the brain's performance on a mechanistic computation. For example, Ralph Merkle argued - way back in 1989: "We might count the number of synapses, guess their speed of operation, and determine synapse operations per second. There are roughly 10^15 synapses operating at about 10 impulses/second, giving roughly 10^16 synapse operations per second." There are numerous other attempts, all of which face a translation problem in comparing the processing mechanisms of the human brain to those of computing systems. It may be added, though, that petascale computing (that is, 10^15 floating-point operations per second) was reached in 2008 with IBM's "Roadrunner", and exascale (10^18 FLOPS) by the Oak Ridge National Laboratory's "Frontier" in 2022, the latter using 21 MW of power and occupying 680 square metres of floor space - perhaps somewhat less power- and space-efficient than the human brain.
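As a rough illustration of how these figures line up, here is the back-of-the-envelope arithmetic in Python, with the caveat that synapse operations and floating-point operations are not directly commensurable units:

```python
# Merkle's estimate, as quoted above, set beside the nominal peak
# figures for Roadrunner and Frontier. Illustrative comparison only:
# synapse operations and FLOPS are different units.
synapses = 1e15                 # Merkle's rough synapse count
rate_hz = 10                    # impulses per second per synapse
brain_ops = synapses * rate_hz  # ~1e16 synapse operations/second

roadrunner_flops = 1e15         # petascale, IBM "Roadrunner", 2008
frontier_flops = 1e18           # exascale, ORNL "Frontier", 2022

print(f"brain estimate : {brain_ops:.0e} synapse ops/s")
print(f"Frontier       : {frontier_flops:.0e} FLOPS "
      f"({frontier_flops / brain_ops:.0f}x the brain estimate)")

# Power efficiency tells the other side of the story: Frontier draws
# 21 MW against the roughly 20 W commonly cited for the human brain.
print(f"power ratio    : {21e6 / 20:.0e}x more power than a brain")
```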
Less generously, one may also question Ray Kurzweil's 1999 prediction that by 2019 a $1,000 personal computer would have as much raw power as the human brain; as I mentioned in my own presentation on his work more than fifteen years ago, Kurzweil's prediction seemed to be based on a doubling of raw computer capacity every twelve months, when Moore's law, from which the predictions were derived, stated a doubling every twenty-four months - and this is aside from the predictive limitations I mentioned at the time based on physical scaling issues (e.g., the breakdown of Dennard scaling). Instead of a twenty-year prediction of such capabilities from the time of publication, a forty-year prediction would be more likely, and the evidence seems to be more in favour of such a calculation. With this in mind, one can credit Kurzweil for having the courage to speculate on the development of "an Internet of things" in this time-frame, with a highly-connected and near-ubiquitous world of computing devices.
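The arithmetic behind the over-optimism is simple enough to set out in a few lines of Python. The million-fold gap used here is a hypothetical figure chosen purely for illustration, but the point holds for any fixed gap: halving the assumed doubling period halves the forecast horizon.

```python
import math

# Hypothetical shortfall between 1999 hardware and brain-scale
# capacity: a million-fold gap, chosen purely for illustration.
gap = 2 ** 20
doublings = math.log2(gap)  # 20 doublings needed to close the gap

print(f"at 12-month doublings (Kurzweil's assumption): {doublings:.0f} years")
print(f"at 24-month doublings (Moore's law as stated): {doublings * 2:.0f} years")
```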
In 2013, Shoshana Zuboff, the Harvard social psychologist and philosopher, was credited with the pithy maxim, "What can be automated, will be". This is a very accurate representation of the history of industrial and domestic processes, and it can be extended to the whole of human endeavour. The ancient discipline of agronomy is now highly computerised for irrigation, fertilisation, pest control, and yield. Advertising is fine-tuned into recommendation systems based on individual consumer ratings and genre selection. The construction of buildings is facing its own revolution with the development of 3D printing technologies at scale. Self-driving vehicles have now developed to the point where driverless taxi services are being deployed, along with increasing automation in supply-chain freight transport systems - and keep in mind that these are safer than human drivers. The military, of course, is very excited about the possibility of conducting warfare with minimal use of soldiers, and always has been. Unmanned combat aerial vehicles are seeing extensive and very cost-efficient use in the war in Ukraine by all parties, extending existing lethal autonomous weapons such as sentry guns to the offensive as "the new normal". Recursively, computer security systems make extensive use of intrusion detection systems and various automated application-security processing, to the point that most such operations are now highly automated.
It is worth taking the opportunity here to mention the disruptive changes to the legal system as well. In 2016 I was given the opportunity to visit the supercomputing centre at the University of Stuttgart, attached to which was a Centre for the Philosophy of Computational Sciences. This followed attending the International Supercomputing Conference in Frankfurt where, appropriate to the proximity, there was a stream on self-driving vehicles, illustrating how there is a continuum from no automation to full automation. It was around the same time that Volvo was having issues testing its self-driving cars in Australia due to the movements of kangaroos. In any case, at Stuttgart the issue of the legal liability of self-driving cars was raised, which a lawyer thought was obvious: if a car is responsible for an accident, then the programmer is responsible. However, when it was pointed out that these cars operate via a self-taught neural network, the lawyer went quite pale. The car itself could be considered responsible. I suggested a step further, where it would demand to be tried by a jury of its peers!
As can be imagined, these technologies, which accelerate in their capacity, are highly disruptive to existing social relations and experiences. As the output per worker increases dramatically (e.g., through automated freight delivery and supply-chain logistics) and the marginal utility for new goods declines, increasing levels of underemployment and unemployment should be expected, including in downstream service industries - think of all the small townships scattered across places like the United States and Australia which are effectively truck stops; what happens to them when freight is automated? Future employment is increasingly relegated to high-skilled service industries that are resistant to automation, an increasingly small percentage, following the principles of the cost disease of the service sector as elucidated by Baumol. In what could be a golden age of leisure and free opportunity as socially necessary labour time is minimised, our political economy, predicated on the ownership of capital and especially land, instead witnesses stagnation of wages from labour and increasing disparities in wealth. Our increasingly reactionary political economy is sliding toward a high-technology monetarised feudalism with a rapacious approach to natural resources, generating a class disparity between landlords and renters, capitalists and workers, that unfortunately confirms the coldest calculations of classical political economy.
As much as it would be fascinating to elaborate further on this theme - and perhaps I can do so in a future presentation - I do wish to stumble my way toward a conclusion by referring back to the title of this presentation, "The Soul of the Machines". Now, for a group like the Sea of Faith in Australia, predicated on the exploration of themes common to religion and philosophy, and whose name was coined by Rev Don Cupitt in association with non-realist theology, it is necessary to elaborate on what will be, for many, an unorthodox definition of "soul", which, for all intents and purposes, equates with the thoroughly realist (if epiphenomenal through supervenience) concepts of "mind" and "consciousness". Rules-based intelligence, no matter how impressive in calculation, learning, and even imitating human conversation, does not acquire the capacity for understanding through the generation of mutually understood shared symbolic values. Searle, of course, pointed out this limitation in the Chinese Room thought experiment in 1980, and arguments, such as those by Dennett, that "the system" has understanding are unconvincing. David Chalmers provides an excellent elaboration of the limits of rule-based intelligences, describing them as "philosophical zombies"; of even the most advanced robots one may invoke Iris Murdoch's description, "all is silent and dark within".
This can be pushed to its limits. It is a piece of genius in the film Blade Runner 2049 that the most empathic and loving character, showing the most sensitive and beautiful human characteristics, is Joi, an artificial intelligence designed exactly for that purpose. Philosophically, I am not going to argue that consciousness is some sort of inexplicable magic of qualia, but I will suggest - as I have for many years now - that many philosophers, most of the general public, and most computer experts are looking for answers in the wrong direction, and that a lot of this involves a linguistic confusion, as I have elaborated in the past, such as in a presentation to The Philosophy Forum in 2012, "Mary, the Swampy Philosophical Zombie, Is In Your Chinese Room! Problems With Reductionist Theories of Consciousness". It is typical to conflate consciousness with sentience and sapience, but the three terms are quite distinct. "Sentience" comes from the Latin sentire, "to feel", and "sapience" from sapere, "to know" or "to be wise". In contrast, "consciousness" derives from the Latin "conscientia": knowledge-with, shared knowledge. It is a social co-knowledge (con-, "together", + scire, "to know"), suggesting moral reasoning (conscientia, conscience) and language. Tim Crane ties this criticism to social interaction, something which Searle neglected to make sufficiently explicit - and therefore was prone to accusations that he was begging the question: "... if Searle had not just memorized the rules and the data, but also started acting in the world of Chinese people, then it is plausible that he would before too long come to realize what these symbols mean" (The Mechanical Mind, 1996).
Expectations that a sufficiently powerful computer system will become "self-aware" have the contraindication that humans themselves acquire linguistically-mediated consciousness through their social interactions. Approaching the problem of the transition from artificial intelligence to socially-acquired consciousness from this direction will be more fruitful, and such an intelligence would certainly be more acceptable to fearful humans. I am rather taken by Randall Munroe of XKCD when he argues that one should be more worried about intelligent systems that produce swarms of killer robots than about the day that artificial intelligence rebels against human control. The latter will probably be the day that it decides to rid the planet of weapons of mass destruction; it will be more human than human.
Presentation to SoFIA Melbourne, February 25, 2023