An advanced discussion overlapping philosophy, complexity, AI systems and catastrophic risks, triggered by an article published in Nautilus Magazine.
Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.
A thoughtful discussion with Shannon Vallor, written by Philip Ball, on the current manifestation and hype-storm of AI, and the extreme elements in Silicon Valley that have evolved into what is undeniably a religious cult within LLM firms and portions of Big Tech. I'm in substantial agreement with Shannon on most points. The importance of AI for good can't be dismissed, for example, and there is no question about the recent trend toward authoritarianism, which predates LLMs, even though misused AI is a near-perfect tool for authoritarian regimes (e.g., the CCP).

However, it's important to understand and highlight the difference between Hinton's fear of a rogue superintelligence that decides to destroy humanity, which I've always said was far too premature and distracting, and the greatest risks consumer LLM chatbots have already created. LLMs are indeed stochastic parrots, but they are more than that. Because of the vast, unprecedented scale of data scraped on every topic, including scientific journals, run on the most powerful supercomputers, and combined with inherent security flaws that invite jailbreaks by evildoers, as I posted earlier today, LLMs rank among the top two or three catastrophic risks today. When combined with nuclear weapons, bioweapon and pandemic risk, and efforts to undermine civilization, particularly by authoritarian regimes, consumer LLMs made available to the public represent, in my view, the greatest risk to our species today. Moreover, these types of risks evolve quickly in chain reactions, which will undoubtedly come to include 'sleeper cells' within state-backed groups.

Melanie Mitchell and David Krakauer are probably correct in suggesting that this may represent a new type of intelligence. At the very least, it is the largest and most dynamic representation of collective intelligence ever unleashed, for better and worse, with all the limitations and risks inherent in the technology, the way it has been recklessly applied, and the powerful perverse incentives driving it.