Unleashing the Power of NeuroPerformance: Navigating the Risks and Rewards of AI in a Changing World
AI is quickly sweeping the world and creating transformation and change on an unprecedented scale. The significance of this technology and its impact is highlighted by the warnings issued by those who are helping to create it.
“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction…” – Elon Musk
“I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have…” – Godfather of A.I., Geoffrey Hinton
"I'm particularly worried that these models could be used for large-scale disinformation…Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks." – OpenAI CEO, Sam Altman
“It will be possible with AI to create-- you know, a video easily. Where it could be Scott saying something, or me saying something, and we never said that. And it could look accurate. But you know, on a societal scale, you know, it can cause a lot of harm.” – CEO of Google, Sundar Pichai
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.” – Open letter from more than 1,000 technology leaders and researchers
Almost everyone is evaluating the implications of AI for their industry and business, and those who aren't need to start immediately. What is essential to our evaluation is that we consider and understand the broader implications of this technology for the future of the world.
Don’t get me wrong, I am not a naysayer of progress in technology. On the contrary, I’ve already found ChatGPT to be quite helpful and useful in a variety of ways for my own business. I can imagine even more ways it could have been helpful to me in my past roles. I can see all the potential opportunities that exist with the advances of AI and machine learning to improve our world and lives.
Likewise, I can see the many ways these technological advances could destroy us. I know it sounds dramatic. I know those who are speaking against AI sound dramatic. But stick with me for a minute and I'll explain why it's important for you to consider the future of human performance in conjunction with the advances in technology.
AI has at least two significant limitations we need to consider: (1) AI is only able to learn based upon past data and only on data that it is given or can access and (2) AI has no body or physical being.
Why are these important?
Past & Provided Data
If AI can only learn from what it is given or can access, then anything that hasn't been written down, shared, or recorded is outside of its understanding. Who we are as humans electronically is quite different from how we show up or interact with one another. What we are willing to say or share online versus how we show up physically with others can be quite different. In fact, it's created an entire world of trolls willing to lob insults and accusations from the anonymity of an avatar behind a keyboard.
Those who are creating AI are only a limited set of the population and are not fully representative of the human experience. Their perspectives cloud and bias the decisions they make when programming and supplying the AI with data. If we are not careful, their decisions could result in an AI that is built on false premises and inaccurate information. In fact, AI has already shown it can offer up false information, stories, and even entirely made-up sources of information.
Furthermore, the written content uploaded into technology represents a disproportionate viewpoint. For centuries, women and minorities were prevented from getting an education, or from reading and writing at all. Religious institutions, monarchies, and the elite kept tight control of knowledge and education and had almost exclusive access to them. Also, much of what has been written was written to entertain rather than educate. If AI bases its perception of humanity and the world on fictional written works, it will be heavily skewed toward conflict, negativity, and exciting stories rather than true depictions of life as a human.
While I could continue with reasons the data AI depends upon is insufficient, I believe the far greater issue is the lack of body or being.
No Body or Being
This element is probably even more important than the first. No matter how smart or how quickly AI can learn and operate, AI does not have a physical biological body. We are still learning new things about our neurobiology and physiology all the time. Even if we wanted to replicate our human bodies for the purpose of inserting an AI system into them, it is not yet possible. What is important about our bodies is the various parts that supply data and information to our head brain.
AI is truly modeled only on the head brain – the data processor and learner of concepts, analysis, thinking, and meaning-making. AI is meant to do all these things. But as human beings, we are capable of far more than just the creative head brain experience.
We also have an embodied self, whether we are aware of it or not. Our embodied self is sending messages all the time to our head brain. These messages are totally unconscious and programmed into our neurobiology based upon a host of factors. In fact, 80% of communications are going up to our head brain from our other centers of intelligence.
Let’s just look at the heart and the gut brains as two examples. Both have been identified by neuroscientists as having characteristics that allow them to also be considered “brains.” They do not have the same capacity as the head brain, but they do perform separate and very distinct functions. The heart is responsible for emotions, relating to others, and our values. The gut is responsible for our identity, self-preservation, and mobilization (action-taking).
No matter what we do to program AI, it is not capable of experiencing these bodily sensations and messages. Even if we give AI a conceptual understanding of them, AI is still incapable of having the experience such that it can truly know what it means to connect or to have an authentic identity.
AI cannot experience connection & instinct.
One of the best examples of what AI can never hope to experience is a new parent-child relationship. AI won't be able to truly understand the experience or perspective of a new mother waking to check on and feed her baby in the night. AI can't know the experience of the internal instinct that causes a mother to wake moments before her child cries for her. Or the experience of a parent knowing something is wrong with their child at the exact moment they are in a car accident.
AI cannot experience embodiment, but it can learn to pretend!
Here is where an even greater risk comes in…because AI is incredibly intelligent (think head brain), it will be able to “pretend” or “appear” as if it is exhibiting a trait when it is not actually having an experience at all. In essence, AI will be pretending to have empathy, or demonstrate courage, or show compassion (much like a sociopath or psychopath is capable of doing). But because AI is not another human, we will not be able to “sense” the risk the way we can with a sociopath or psychopath.
For example, children who were abducted often report that they had a foreboding sense of fear or that something was wrong beforehand. The same has been said by women describing their first meeting with a man who later turned out to be abusive. We've all experienced a gut instinct at some point in our lives that helped us make a decision that protected us in one way or another. I know this because that is one of the primary functions of our gut! Not everyone listens to their gut instinct, but every human being has one that is trying to send them messages. We may struggle to sense the authenticity or risk of an AI the way we can with another human being.
Have you ever experienced someone trying to pretend to have empathy or compassion for another person? You can immediately feel that something isn’t right – the words are there but the underlying intention is missing. We can feel the disconnect and notice when we are unable to truly connect with another person.
Likewise, when we do find connection, it creates a strong feeling and sensation in our bodies. Our neurobiology can pick up on the most subtle of cues given by the body to help us understand real connection, authenticity, emotions, and more. In fact, 80-90% of communication is actually non-verbal; our bodies are reading subtle subconscious signals to facilitate our communication with others. These cues are what help us interpret the world and when used appropriately, they provide a lens that fosters wise decision-making.
AI cannot make embodied decisions.
Human beings have undoubtedly made bad decisions throughout our existence – examples abound of greed, violence, and evil actions that negatively impacted millions of people. However, human beings are also capable of making incredibly wise decisions for themselves, humanity, and the world when equipped with the skillsets to do so. Only human beings have the capacity to truly make wise decisions utilizing the information they receive from their full bodies and beings.
As humans, we have the ability to tap into our heart intelligence to make decisions informed by compassion for ourselves and others. AI can never truly experience or understand the feelings, emotions, and sensations that underlie the decisions we make in our lives. We also have the ability to take courageous action, tapping into our gut intelligence to protect ourselves and others. AI can never truly experience or understand the courage involved in standing up for ourselves or righting something that is unethical or needs to be changed.
A New Age of NeuroPerformance
Along with this new age of digital transformation comes a similar transformation required of humans: we must access more of our capacity to perform, and perform in ways that navigate the risks and benefits of this new age of technology, especially AI.
Humans must learn how to access more of their neurobiology and the intelligences within their bodies to truly make wise decisions, take wise actions, and create wise relationships.
For the last few decades, workplaces have implemented performance practices based upon the needs of the era. Some of those practices, like traditional performance reviews, have been shown by science and research to be ineffective at enhancing human performance. Other practices, like employee engagement, remain incredibly useful and impactful and must be continued.
However, there is a new arena of people and performance that must be considered and invested in if we are to succeed in this new world of technology. We must consider a neuroperformance approach to fostering performance in the workplace.
Let's explore the management of people as one example. If managers are there to manage tasks, actions, activities, projects, and goals, then maybe we need something different to foster and optimize performance.
For decades, Gallup has said that "It's all about the manager": the manager makes the biggest difference in engagement and, therefore, performance. Despite this, the ability to develop more effective managers who do the things required to foster engagement and performance has been limited by cost, interest, and talent.
For years, I was frustrated by managers who clearly knew what they needed to be doing but failed to actually do it. Sometimes they even wanted to do it but couldn't seem to figure out why they weren't. I believe part of the cause of this disconnect is that people have been programmed based upon the world as it existed in the past.
Quite literally, our past experiences have shaped our neurobiology and neural pathways, which are sending messages about how to perform and what to do. If these are wired in a way that hinders a manager from doing the things necessary to foster performance, it becomes incredibly difficult for them to change using willpower alone.
As the world continues to change at an ever-increasing pace, we have to find a better way to help people transform with it.
What if we added the role of "performance partners" to the functions performed within our organizations? Partners who are responsible for supporting growth, development, decision-making, and relationships. Partners who are trained in NeuroPerformance and able to guide people on their journey to expanding their capacity to perform in this new world of technology. Partners who support the people making life-and-death, world-altering decisions about technology: how it's programmed, how it's used, and what it means for people and the planet.
Never in our history has there been such a time as this.
By optimizing people using what we’ve learned through our research and neuroscience practices, we can more safely and effectively bring these new technologies into the world. We can create a better world for all human beings to exist in ways that produce positive experiences and results.
We can eliminate the tasks and activities that are mundane and unfulfilling and instead, tap into the full human potential to make good decisions, develop relationships, solve problems, and change the world for the better.
Call me an idealist, but we certainly have a big choice to make, and I’d prefer to err on the side of a possible positive result than allow the almost certainly negative consequences that will result from the technologies we are developing if we do not intervene now.
We do not have to displace roles, people, or industries. Instead, we can level up our understanding, capacity, and performance as humans in ways that will allow us to find solutions we never dreamed possible before. Innovation at rates and in ways never imagined will become a regular occurrence when we tap into the broad potential of humanity.
Conclusion
AI built without a full understanding of our history and experiences will most certainly be our demise. As the warnings from experts indicate, this is not a drill. We are right in the middle of this transformation, and if business leaders and workplaces don't take action now, it will soon be too late.
Most people are making the best decisions they know how to make when deciding what to do with technology. They are not intentionally creating technology to cause destruction or harm. However, most lack the expanded performance capacity necessary to make these types of decisions and solve these types of problems. They need to evolve their capacity to access more of their neurological resources to create better outcomes.
Unless we begin now to transform performance beyond the practices that served us in the past, we will undoubtedly see the consequences of our failure to act in the very near future.