We are GenAI's System 2
The world is trying to understand the potential of Generative AI, and many resources, arguably most, are going into improving AI models. However, exploring how human-machine collaboration can enhance accuracy and insight is just as worthwhile.
One promising direction is leveraging Daniel Kahneman's System 1 / System 2 Thinking, which distinguishes faster, more intuitive thinking modes from slower, more reflective ones. While AI companies are considering this framework in their algorithm-enhancing research, I want to focus on the immediate opportunity for most organizations and users: AI-augmented Collective Intelligence (ACI). That means ensuring humans are in the loop as System 2 to complement machines' System 1 (and with the new OpenAI o1 model, possibly "System 1.5").
Thinking fast and slow is how cognition happens
Kahneman's model, from his seminal book Thinking, Fast and Slow, distinguishes between two modes of cognitive processing:
System 1 Thinking: fast, automatic, and intuitive, relying on pattern recognition and heuristics with little conscious effort.
System 2 Thinking: slow, deliberate, and analytical, handling effortful reasoning, planning, and self-monitoring.
These aren't separate brain systems but conceptual models that illustrate how we process information. They often work in tandem, influencing each other and sometimes operating simultaneously. While both can be prone to errors and biases, System 1 is more subject to them because of its speed.
Nor is this a watertight divide: the distinction between the systems is more of a continuum than a strict dichotomy, and certain System 2 processes can become more automatic with practice, approaching System 1's efficiency. That said, the two systems are a helpful organizing principle for what we need.
Computers are faster. Humans can make them more logical and deliberate
In the context of human-AI collaboration, integrating System 1 and System 2 thinking offers a helpful framework:
AI's System 1 Input to Collective Intelligence
Humans' System 2 Input to Collective Intelligence
We aren't just talking about user interfaces. We are talking about designing a more deliberate synergy process, one where humans are supported holistically (UI, UX, and the AI itself) in their role as critical thinkers: AI that asks us questions, guides us through a problem-solving flow (here and here), or enlists us for quality control, among other patterns. And crucially, doing so not just one-to-one but also across groups and networks of people.
This is what I call ACI (AI-augmented Collective Intelligence) System 1/2.
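To make the idea of a "System 2 checkpoint" concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `Draft` class, the checklist, the predicates that stand in for a human reviewer's judgment); it only illustrates the pattern of gating a fast AI draft behind deliberate human checks.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated draft plus the System 2 checks applied to it."""
    text: str
    checks: list = field(default_factory=list)
    approved: bool = False

def system2_review(draft: Draft, checklist) -> Draft:
    """Apply each human-defined check (a System 2 prompt) to the draft.

    Each check is a (question, predicate) pair: the question is what the
    reviewer is asked; the predicate stands in for their deliberate judgment.
    """
    for question, predicate in checklist:
        draft.checks.append((question, predicate(draft.text)))
    draft.approved = all(passed for _, passed in draft.checks)
    return draft

# Hypothetical usage: the AI's fast "System 1" output...
ai_output = Draft(text="Revenue will definitely triple next quarter.")

# ...gated by slower, deliberate human checks.
checklist = [
    ("Does the claim avoid unsupported certainty?", lambda t: "definitely" not in t),
    ("Is the claim non-empty?", lambda t: len(t) > 0),
]

reviewed = system2_review(ai_output, checklist)
print(reviewed.approved)  # the overconfident claim fails the first check
```

The design choice worth noticing is that the checklist, not the model, owns the approval decision: the human-authored questions travel with the draft, so the review trail is inspectable afterwards.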
As often happens with generative AI, a recent turn of events might change things: the launch of more powerful reasoning models, like OpenAI o1. While these are early days, the new models show that incorporating typical human "System 2" thinking methods helps the AI achieve more sophisticated reasoning, planning, and complex problem-solving. This raises the bar for humans, but it doesn't eliminate the value we bring as custodians of System 2. For now, I see the new AI models as moving along the continuum between System 1 and 2, a sort of System 1.5, to put it crudely.
Warning: Humans stay in, not on, the loop
Despite the potential of integrating System 1 and System 2 thinking into human-AI collaboration, several significant challenges still need to be addressed.
1. Difficulty in Transitioning Between Systems: One of the main hurdles is that many people struggle with the handover between fast, intuitive thinking and slower, analytical reasoning, particularly when guiding machines in real-time. The smooth transition required to optimize human-AI collaboration is not an inherent skill for most individuals. This difficulty often leads to inefficiencies and errors when working with AI systems. For instance, people fall prey to biases and use cognitive shortcuts when reviewing AI's output.
2. Risk of Human Oversight: Recent research highlights a critical risk: the potential for humans to become overly reliant on AI, leading to a phenomenon often described as "falling asleep at the wheel." When humans overly trust AI to handle tasks, they may disengage from critical thinking, reducing their ability to catch mistakes or make nuanced judgments. This over-reliance can be dangerous, particularly in high-stakes environments where vigilance is crucial.
3. Lack of Awareness and Knowledge: End-to-end processes often need to be designed intentionally across tools. Predictive (classic) AI, Robotic Process Automation, Business Intelligence tools, and Generative AI each have their place in many processes, provided the flow is designed deliberately. The landscape of AI tools is vast and complex, and without proper understanding, users may misuse these tools or fail to use them to their full potential.
The Way Forward: Technology, Process, and People
Several solutions can be implemented to overcome these challenges.
Developing Hybrid Systems: Beyond improving user interfaces, it is also crucial to consider hybrid systems—combinations of AI and human input that dynamically shift between different types of processing.
Building Scaffoldings for Human-Machine Interaction: Creating supportive structures or frameworks is essential for smoother interaction between humans and machines.
Building Skills for Augmented Thinking: To capitalize on these advanced systems, we must invest in developing the skills needed for augmented thinking.
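The hybrid systems described above can be sketched as a simple routing rule: let the AI answer directly when it is confident, and escalate to a human otherwise. This is a minimal illustration, not a production design; the `fake_model`, `fake_reviewer`, and the confidence threshold are all assumptions for the sake of the example.

```python
def hybrid_answer(question, ai_model, human_reviewer, threshold=0.8):
    """Route between fast AI ("System 1") and slow human review ("System 2").

    ai_model returns (answer, confidence); below the threshold, the answer
    is escalated to the human reviewer instead of being returned directly.
    """
    answer, confidence = ai_model(question)
    if confidence >= threshold:
        return answer, "system1"
    return human_reviewer(question, answer), "system2"

# Hypothetical stand-ins for a real model and a real reviewer.
def fake_model(q):
    return ("42", 0.95) if "sum" in q else ("unsure", 0.3)

def fake_reviewer(q, draft):
    return f"human-checked: {draft}"

print(hybrid_answer("what is the sum?", fake_model, fake_reviewer))   # handled by AI alone
print(hybrid_answer("forecast demand", fake_model, fake_reviewer))    # escalated to a human
```

Tagging each answer with the path it took ("system1" or "system2") is deliberate: it lets teams audit how often humans were actually engaged, which speaks directly to the "falling asleep at the wheel" risk above.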
Humans, in groups and individually, are GenAI's System 2, at least for now. Let's organize ourselves accordingly.
This article is part of a series on AI-augmented Collective Intelligence and the organizational, process, and skill infrastructure design that delivers the best performance for today's organizations. More here. Get in touch if you want these capabilities to augment your organization's collective intelligence.