Why "creating machines in the human image" is not rational and wise?

Why "creating machines in the human image" is not rational and wise?

Building Machines in Our Image

How do you define intelligence? We each have our own notion of what it means to be intelligent: perhaps being skilled in math or adept in social situations. But providing a general definition is surprisingly difficult. Herein lies the challenge for artificial intelligence, or AI: how do we structure scientific study around a term that is typically reserved for humans?

While progress has been made in mimicking aspects of human intelligence, the human-focused origins of the field of AI may be limiting the scope of our scientific pursuit. As we move forward, perhaps looking beyond ourselves for inspiration will provide a more comprehensive definition of intelligence, or a new concept altogether.

The concept of ‘intelligence’ comes from human psychology, where it is measured using IQ tests. Since an abstract concept like intelligence cannot be measured directly, these tests instead evaluate performance on a range of tasks, from reasoning to memory and verbal comprehension. Not surprisingly, when the field of AI adopted the term ‘intelligence’ over half a century ago, a focus on performing similar cognitive tasks came with it.

As a famous example, the Turing test pits a machine against a human: the machine uses written conversation to try to convince the human that it, too, is human. Similar motivations fuel modern-day efforts to use computers to master board games like Go, human pursuits that require coordinated actions over many steps. Human influences are also present in tasks like processing language or identifying objects in images. In the absence of a clear definition of intelligence, these approaches implicitly treat human tasks as a proxy for human intelligence, with the hope that a machine capable of performing these tasks and more will attain ‘artificial general intelligence,’ becoming flexible enough to perform any task.

An example of a machine classifying objects in an image. The machine uses AI to rate how likely the image is to be of each particular type.
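
To make the caption concrete, here is a minimal sketch, in plain Python, of how a classifier's raw scores are commonly converted into the per-class likelihoods described above using a softmax function. The labels and score values are invented placeholders, not outputs of any real model.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a model might assign to one image
labels = ["cat", "dog", "car"]
scores = [2.1, 0.3, -1.2]

for label, p in zip(labels, softmax(scores)):
    print(f"{label}: {p:.1%}")
```

The key point is that the machine does not output a single hard answer; it rates every candidate label, and we read the highest-rated one as its classification.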

While such generally intelligent machines do not yet exist, we have made advances in many cognitive tasks, impacting society through applications like self-driving cars, facial recognition, and language translation. This progress has largely been the result of deep neural networks: mathematical models loosely inspired by the biological neurons in our brains. Through a process of learning to map input data (e.g. a photo) to corresponding outputs (e.g. what objects that photo contains), machines are now becoming capable of tasks like recognizing, reasoning about, and manipulating objects. Mastering many of these basic human cognitive capabilities now seems on the horizon. However, it remains unclear whether such machines would unlock the mysteries of our own range of capabilities or those of other organisms, let alone general intelligence. Rather than exploring broader, fundamental principles underlying intelligent systems, the field of AI has, in effect, been teaching to the Turing test, focusing on mimicking our own human capabilities at ever-increasing levels of sophistication.
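
As an illustration of the "learning to map inputs to outputs" described above, the sketch below trains a tiny feed-forward network with PyTorch. The data, network size, and training settings are arbitrary placeholders chosen only to show the general pattern, not a realistic vision model.

```python
import torch
import torch.nn as nn

# Stand-ins for real data: 100 fake "images" (flattened to 64 numbers each)
# and a label for each, drawn from 3 hypothetical object classes.
inputs = torch.randn(100, 64)
targets = torch.randint(0, 3, (100,))

# A small feed-forward network standing in for a deep neural network.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Learning: repeatedly nudge the weights so the predicted outputs
# better match the target labels.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```

Everything described in this paragraph, from recognizing objects to translating language, follows this same basic recipe, scaled up to far larger networks and datasets.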

As AI keeps advancing, this adherence to a human-centric view of intelligence could have major consequences. I’m reminded of the Copernican revolution: for centuries, astronomers placed the Earth at the center of the universe, our desire for significance guiding our conception of reality. However, when observations did not align with this theory, we came to understand that the Earth orbits the Sun. In a similar way, I feel that we have placed humans at the center of our definition of intelligence. Clearly, we have unique capabilities, just as our planet is unique among its neighbors. Yet any comprehensive definition of intelligence should account not only for our own capabilities but also for those of other entities. Looking to other biological and human-made entities will also help us see ourselves within a broader scope of intelligence, much as we study the Earth in the context of other planets.

Heliocentric vs. geocentric depictions of planetary motion. Until the geocentric view was discredited, we placed ourselves at the center of the universe. A similar phenomenon may be affecting our perception of intelligence.

When we look at biology, we see systems that sense and respond to their surroundings. One such system is the cell, which has sensors for chemicals, as well as actions it can take in response, like going into “hibernation”. Entire multi-cellular organisms can also be considered as systems. Animals, from the smallest insect to the largest whale, interpret and interact with their environments in a multitude of ways.

In nature, we find systems at multiple scales that sense and respond to their environments, from individual cells up to collections of multicellular organisms.

Plants are attuned to sensory inputs like sunlight, moisture, and temperature, prompting responses like orienting leaves, extending roots, and releasing seeds. And groups of organisms, from forests of trees to colonies of ants, collectively sense and respond to their environments in ways that we are still just beginning to understand. Ultimately, all of these processes share a common form: they convert energy into actions, affecting themselves and their environments to promote the survival of genes.
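
The shared "sense and respond" form described above can be sketched as a simple loop. This toy Python example, with an invented light-level signal and made-up responses, is meant only to illustrate the common structure, not to model any real cell, plant, or animal.

```python
def sense(environment):
    """Read a simple signal from the environment (here, a light level)."""
    return environment["light_level"]

def respond(signal):
    """Choose an action based on the sensed signal."""
    return "orient_toward_light" if signal > 0.5 else "conserve_energy"

environment = {"light_level": 0.8}
for _ in range(3):
    action = respond(sense(environment))
    print(action)
    # Acting and time passing change the environment,
    # which changes what is sensed on the next pass.
    environment["light_level"] *= 0.5
```

Whether the system is a single cell, a plant, or a colony of ants, the loop is the same: sense, respond, and thereby alter both itself and its surroundings.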

Our technological inventions can also be viewed from this systems perspective. The tools of our early ancestors, like spears and boats, expanded the ways in which they could respond to their environments. More recent inventions, like radios and cameras, have similarly expanded the ways in which we sense our environments. Modern advances in computing, and now AI, have taken this trend further, creating systems that can sense and respond to their environments largely independently of human input.

#HumanAI #FutureofAI #AIforGood #DomoreWithAI
