Toward Global AI Literacy: narrow AI/LLM/GPT-4 is a nonsense AI (NAI): generative AI is degenerative AI

Illiteracy, understood not as the inability to read, write and count, but as a lack of knowledge, both general and specific, is a global epistemic pandemic. Its forms are as many as those listed below:

  • Literal illiteracy
  • Cultural illiteracy
  • Civic illiteracy
  • Racial illiteracy
  • Financial illiteracy
  • Numerical illiteracy
  • Statistical illiteracy
  • Factual illiteracy
  • Scientific illiteracy
  • Technological illiteracy
  • Environmental illiteracy

A new sort of illiteracy is now affecting the whole world: data and AI illiteracy.

According to the WEF, without universal AI literacy, AI will fail us: "With AI already transforming every aspect of our personal and professional lives, we need to be able to understand how AI systems might impact us — our jobs, education, healthcare — and use those tools in a safe and responsible way".

Meanwhile, global AI illiteracy is exploited by its stakeholders to capitalize on many simple minds, be they politicians or economists, investors and businessmen, scientists and engineers, or the general public.

This could partly be explained by the fact that today's AI, in the forms of ML, DL and ANNs, is a nonsense AI, without any sense or meaning, any general intelligence or general knowledge of the world. Nonsense AI is like a nonsense song, written mainly for the purpose of entertainment using nonsense syllables, with a simple melody, quick tempo and repeating sections, marked by absurdity and the ridiculous.

As an example, take the 150-page article "Sparks of Artificial General Intelligence: Early experiments with GPT-4" and the reaction to it, the open letter "Pause Giant AI Experiments: An Open Letter".

The first one claims that "beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system".

To prove its nonsense claims, the paper relies on one of many definitions: "Intelligence as a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience".

The other one unintentionally hypes the nonsensical ChatGPT by posting the nonsense demand: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". Here is the message:

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI PrinciplesAdvanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable". And here are some of the signatories (now numbering more than 50,000):

Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at University of Montreal

Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”

Elon Musk, CEO of SpaceX, Tesla & Twitter

Steve Wozniak, Co-founder, Apple

Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem

Why It Is Nonsense AI

LLM/GPT-4 is a piece of stochastic/statistical/random/probabilistic AI, implying uncertainty, correlative patterns and unpredictability, while real AI implies certainty, causal patterns and predictability.
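A minimal sketch of this stochastic character, assuming a toy vocabulary and made-up probabilities (not GPT-4 internals): given one and the same prompt, a probabilistic sampler may return different continuations on different runs.

```python
import random

# Hypothetical next-token distribution for the prompt "The cause of the fire was ..."
# (illustrative numbers; real LLMs compute such distributions over ~100,000 tokens)
next_token_probs = {
    "arson": 0.40,
    "lightning": 0.30,
    "unknown": 0.20,
    "aliens": 0.10,   # linguistically valid, factually nonsensical
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; a higher temperature flattens the distribution,
    increasing randomness (and the chance of nonsense)."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt can yield different outputs on different runs:
for run in range(5):
    print(run, sample_next_token(next_token_probs, temperature=1.2))
```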

To be really intelligent, it needs to leapfrog to Real/Causal AI and then make another quantum leap to Interactive AI.

Stochastic AI (LLMs) > Causal AI > Interactive AI

As for now, it all relies on your prompting, being intelligently dumb, dull and defective.

In other words, driven by blind statistics and stochastic algorithms, today's AI should be classified as Nonsense AI, with its branches of ML, DL and ANNs, and its applications, large language models (LLMs) or generative AI tools, mindlessly spewing out nonsensical content, including audio, code, images, text, simulations, and videos, such as ChatGPT, DALL-E 2, and Bing AI.

As such, all LLMs, such as ChatGPT (GPT-3/4) or Google's Bard, belong to DE-generative AI systems or synthetic media, capable of generating text, images, code, video and other media in response to prompts (natural-language queries), while missing any general or specific knowledge about the world.

The Truth about AI: pros and cons

AI has been touted as the technology that can surpass human intelligence and behavior.

While AI has the potential to revolutionize industries and contribute trillions of dollars to the global economy, it is important to dispel some common misconceptions about what AI truly is and how it works.

AI is often portrayed as a human-mimicking technology that poses significant safety and security risks. The reality is that AI is not capable of replicating human intelligence and behavior in its current form. Instead, AI is a set of algorithms and statistical models that can analyze and make predictions based on large datasets.

It is critical for governments, leaders, and decision makers to develop a firm understanding of the fundamental differences between artificial intelligence, machine learning, and deep learning.

AI applies to computing systems designed to perform tasks usually reserved for human intelligence, using logic, if-then rules, and decision trees. AI recognizes patterns from vast amounts of quality data, providing insights, predicting outcomes, and making complex decisions.
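A minimal sketch of such "logic, if-then rules, and decision trees", with invented feature names and thresholds used purely for illustration:

```python
def approve_loan(income: float, debt_ratio: float, late_payments: int) -> str:
    """A tiny hand-crafted decision tree: explicit if-then rules, no learning
    involved (the thresholds below are illustrative assumptions)."""
    if late_payments > 2:
        return "reject"
    if income >= 50_000:
        return "approve" if debt_ratio < 0.4 else "review"
    return "review" if debt_ratio < 0.2 else "reject"

print(approve_loan(income=60_000, debt_ratio=0.3, late_payments=0))  # approve
print(approve_loan(income=30_000, debt_ratio=0.5, late_payments=1))  # reject
```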

There are three types of AI:

  • Artificial narrow intelligence (ANI), which has a narrow range of abilities
  • Artificial general intelligence (AGI), which is on par with human capabilities
  • Artificial superintelligence (ASI), which is more capable than a human 

Current AI systems are regulated by existing, non-AI-specific regulations such as data protection, consumer protection and market competition laws.

Machine learning (ML) is a subset of AI that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Voice assistants like Amazon's Alexa and Apple's Siri improve every year thanks to constant use by consumers, coupled with the machine learning that takes place in the background.
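A minimal sketch of "improving at tasks with experience over time", on made-up toy data: a one-parameter model fitted by stochastic gradient descent gets closer to the underlying relationship as it sees more examples.

```python
import random

true_w = 2.0     # the hidden relationship: y is roughly 2 * x
w = 0.0          # the model's initial guess
lr = 0.1         # learning rate

for step in range(1, 201):
    x = random.uniform(0, 1)
    y = true_w * x + random.gauss(0, 0.1)   # one noisy observation ("experience")
    error = w * x - y
    w -= lr * error * x                     # gradient step on the squared error
    if step % 50 == 0:
        print(f"after {step} examples, estimated w = {w:.2f}")
# The printed estimates approach 2.0 as experience accumulates.
```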

Deep learning (DL) is a subset of machine learning that uses advanced algorithms to enable an AI system to train itself to perform tasks by exposing multilayered neural networks to vast amounts of data. It then uses what it learns to recognize new patterns contained in the data. Learning can be human-supervised, unsupervised, and/or reinforcement learning, as Google's DeepMind used to learn how to beat humans at the game of Go.
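A minimal sketch of what a "multilayered neural network" looks like mechanically, with arbitrary layer sizes and random, untrained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 3 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)                   # hidden layer with ReLU
    logits = h @ W2 + b2                             # output layer
    return np.exp(logits) / np.exp(logits).sum()     # softmax over 3 classes

x = rng.normal(size=4)     # one input example
print(forward(x))          # three class probabilities (meaningless until trained)
# Supervised, unsupervised or reinforcement learning would adjust W1, b1, W2, b2.
```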

The confusion around AI may stem from the marketing efforts of big tech companies, who often use terms like "machine learning," "deep learning," and "neural networks" interchangeably with AI. These technologies are certainly important components of AI, but they are not the whole story. AI is a much broader field that encompasses a wide range of techniques and approaches.

It is true that AI has the potential to generate trillions of dollars in economic value. According to a study by PwC, AI could contribute up to $15.7 trillion to the global economy by 2030. However, this potential can only be realized if we have a clear understanding of what AI is and how it works.

One of the biggest misconceptions about AI is that it is capable of making decisions based on a "model of the world" that is as complex as that of a human being. While AI models can certainly be trained to recognize patterns and make predictions based on those patterns, they do not have a true understanding of the world. This means that they are limited in their ability to make decisions that are truly nuanced and complex.

AI can be improved over time. As we continue to develop new algorithms and techniques, we may be able to create AI systems that are more capable of mimicking human intelligence and behavior. It is also important to acknowledge that AI is already having a significant impact on a wide range of industries, from healthcare to finance to transportation.

It is essential to recognize that AI is not a counterfeit of human intelligence, but rather a powerful tool that can help us analyze and understand complex datasets. By dispelling some of the common misconceptions about AI, we can have a more nuanced and accurate understanding of its potential and limitations. As we continue to develop and refine AI technologies, we may be able to unlock even more of its potential to improve our lives and the world around us.

If LLMs/GPT-4 are the way to AGI

To build fully self-knowing AI beings that effectively interact with other machines, humans and the world as a whole, you need a real and true AI paradigm, instead of mimicking/simulating/replicating humans: our bodies and brains, minds, intellect, learning, intelligence and will.

Broadly, there are two categories of technology/machine/computing/cyber/artificial intelligence:

a false, fake or fool AI (3fAI), operationalized by ML, where predictive models are trained on historical data and used to make future predictions; deep neural networks under the moniker of deep learning; and foundation models trained on broad data at scale that can be adapted (e.g., fine-tuned) to a wide range of downstream tasks; current examples include BERT, GPT-3, and CLIP.

vs.

Real, True and Intelligent AI, or Transdisciplinary AI, Trans-AI.

Again, the foundation models amount to the same deep fake AI. It is a narrow, weak, human-replacing, big-tech AI of statistical, biased-data-driven ML/DL/ANNs.
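For concreteness, a hedged toy sketch of the "adapted (e.g., fine-tuned) to downstream tasks" idea: a frozen random projection stands in for a pretrained foundation-model encoder (a deliberate simplification; real encoders such as BERT are learned, not random), and only a small task-specific head is trained on a handful of labelled examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen, pretrained foundation-model encoder:
# maps 10-dimensional raw inputs to 16-dimensional "features".
W_frozen = rng.normal(size=(10, 16))
encode = lambda x: np.tanh(x @ W_frozen)

# Tiny downstream task: 20 labelled examples with binary labels.
X = rng.normal(size=(20, 10))
y = (X[:, 0] > 0).astype(float)

# Only the task head (17 parameters) is trained -- this is the "adaptation".
w_head, b_head = np.zeros(16), 0.0
features = encode(X)
for _ in range(500):
    p = 1 / (1 + np.exp(-(features @ w_head + b_head)))   # sigmoid predictions
    grad = p - y                                           # logistic-loss gradient
    w_head -= 0.1 * features.T @ grad / len(y)
    b_head -= 0.1 * grad.mean()

acc = ((features @ w_head + b_head > 0) == (y == 1)).mean()
print(f"training accuracy after fine-tuning the head: {acc:.2f}")
```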

The genuine, human-augmenting AI is Causal Machine Intelligence and Learning, emerging as Trans-AI or Meta-AI, combining as its integrated modules:

  • ANI, ML, DL, ANNs, LLMs, Machine Perception, Computer Vision
  • Foundation Transformer Models, AGI, ASI
  • Contextual, Composite, Causal AI

If degenerative AI is as creative as humans

Creativity, whether exploratory, transformational or combinational, could be a real attribute of LLMs. Machine creativity, also known as computational creativity, artificial creativity, mechanical creativity, creative computing or creative computation, is meant to complement human creativity.

With powerful language machines, we have two types of creativity:

  • Stochastic Creativity or Imitative Originality
  • Spontaneous Creativity or Real Originality

The first one is typical for narrow/weak AI models, which combine data points (tokens) probabilistically, manipulating petabytes of language data of various modalities.

Creativity is stochastic if there is uncertainty or randomness involved in the outcomes. Stochastic is a synonym for random and probabilistic, although it is different from non-deterministic. Many machine learning algorithms are stochastic because they explicitly use randomness during optimization or learning.
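A minimal sketch of such "randomness during optimization or learning", with toy data and an illustrative learning rate: two training runs that differ only in their random seed end up with different parameters, and hence different "creations".

```python
import random

data = [(x / 10, (x / 10) ** 2) for x in range(11)]   # toy (x, y) pairs, y = x^2

def train(seed):
    """Fit y ~ a*x + b by stochastic gradient descent; the random seed controls
    the initialization and the order in which examples are visited."""
    rng = random.Random(seed)
    a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)     # random initialization
    for _ in range(200):
        x, y = rng.choice(data)                       # random example order
        err = a * x + b - y
        a -= 0.05 * err * x
        b -= 0.05 * err
    return round(a, 3), round(b, 3)

# Same data, same algorithm, different randomness -> different end results.
print(train(seed=1))
print(train(seed=2))
```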

The second one is/was typical for humans, when ideas seemed to come out of nowhere, "divinely inspired". Do ideas really come out of nowhere? They come without any external force, cause, influence, treatment or stimulus, SPONTANEOUSLY.

Today, humans are losing their originality and individuality, being rewarded only for common things, thus becoming average in all respects, like weak/narrow AI models.

This is why the open letter "calls on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".

Uncreative/unoriginal/common people are easily displaced by powerful AI systems, for stochastic creativity imitates well.

It is wisely noted: "originality is undetected plagiarism". Humans and machines have reached the same level: Combinatorial Creativity.

True, exploratory and transformational creativities are no longer in demand.

Again, real originality/creativity/innovation implies general intelligence, integrated world-views, or general knowledge about the world.

Standard Universal Ontology (SUO) Model vs. LLMs

Such general knowledge is formally integrated by the Standard Universal Ontology (SUO) model of the world, its contents, and the data universe (data sets, data points and causal data relationships).

The Standard Universal Ontology (SUO) is a comprehensive, consistent and computable model of Reality, based on the mathematical modelling and computational simulation of the World of all Possible Realities.

The SUO is the fundamental core of any intelligent structures, processes, and activities, mind and intelligence, philosophy, science and engineering, technology and industry, social order, economy and government.

It embraces fundamental categories and concepts, classifications and taxonomies, theories and ontologies, semantic models and knowledge graphs, scientific methodologies, intelligent algorithms, computing models, statistical techniques, data analytics models, etc.

Unlike language models, which assign non-zero probabilities to linguistically valid sequences that may never be encountered in the training data (hallucinating), the SUO causally and semantically orders the digital infinity of language (an infinite variety of valid sentences).
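A minimal sketch of that contrast, using a toy corpus and an illustrative smoothing constant: a smoothed bigram language model assigns a strictly positive probability to a well-formed sentence it has never seen, which is exactly the property that lets it hallucinate.

```python
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def sequence_prob(words, alpha=0.1):
    """Add-alpha smoothed bigram probability of a word sequence."""
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * len(vocab))
    return p

seen = "the cat sat on the mat".split()
unseen = "the mat sat on the dog".split()   # grammatically valid, never observed
print(sequence_prob(seen), sequence_prob(unseen))   # both strictly positive
```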

Pause Giant AI Experiments: An Open Letter

Sparks of Artificial General Intelligence: Early experiments with GPT-4

Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data.

In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.

The Truth About AI: Dispelling Misconceptions and Understanding Its Potential 
