We are GenAI's System 2

The world is trying to understand the potential of Generative AI, and most resources are going into improving the AI models themselves. However, exploring how human-machine collaboration can enhance accuracy and insight is just as worthwhile.

One promising direction is leveraging Daniel Kahneman's System 1 / System 2 framework, which distinguishes faster, more intuitive modes of thinking from slower, more reflective ones. While AI companies are applying this framework in their algorithm-enhancing research, I want to focus on the immediate opportunity for most organizations and users: AI-augmented Collective Intelligence (ACI). That means ensuring humans are in the loop as System 2 to complement machines' System 1 (and, with the new OpenAI o1 model, possibly "System 1.5").


Thinking fast and slow is how cognition happens

Kahneman's model, from his seminal book Thinking, Fast and Slow, distinguishes between two modes of cognitive processing:

System 1 Thinking:

  • Fast, automatic, and intuitive: This mode operates almost effortlessly, drawing on instincts, emotions, and past experiences to make quick decisions. In moments of stress or danger, this also means keeping us out of trouble, as when you are driving and must react instantly to something unusual.
  • Unconscious processing: It functions beneath the surface, handling routine tasks and swift judgments without deliberate thought.

System 2 Thinking:

  • Slow, deliberate, and analytical: This mode requires conscious effort and is used for more complex problem-solving and decision-making, such as finding a win-win solution to a complex negotiation with a supplier.
  • Logical and rational: System 2 engages when we need to consider information, analyze data, and weigh options carefully.

These aren't separate brain systems but conceptual models that illustrate how we process information. They often work in tandem, influencing each other and sometimes operating simultaneously. While both can be prone to errors and biases, System 1 is more subject to them because of its speed.

Nor is this a watertight divide. Specific System 2 processes can become more automatic with practice, approaching System 1's efficiency, and the distinction between the two systems is more of a continuum than a strict dichotomy. That said, they are a helpful organizing principle for what we need here.


Computers are faster. Humans can make them more logical and deliberate

In the context of human-AI collaboration, integrating System 1 and System 2 thinking offers a helpful framework:

AI's System 1 Input to Collective Intelligence

  • Routine Tasks and Automation: Just as System 1 handles routine tasks automatically in humans, AI can efficiently manage repetitive tasks such as data entry, sorting, or preliminary data analysis. This automation frees humans to focus on more complex challenges.
  • Instantaneous Responses: AI provides quick, heuristic-like responses to straightforward queries, mirroring System 1's rapid decision-making, though it will likely tend toward standard, "safe" answers. This capability is particularly valuable in, for instance, customer service or real-time data monitoring.
  • High-Volume Pattern Recognition: AI's strength in identifying patterns in large datasets parallels the intuitive pattern recognition of System 1 thinking. For example, in market research or employee experience analyses, GenAI can identify conversation patterns across many respondents, enabling analysts to engage with that corpus more effectively (a minimal sketch follows this list). Clearly, GenAI gets many patterns wrong, which requires humans to be in the loop (more below).
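As one illustration of high-volume pattern recognition, here is a minimal sketch that clusters free-text survey responses so an analyst can review the machine-suggested patterns. It assumes the sentence-transformers and scikit-learn libraries; the model name, sample responses, and cluster count are all illustrative.

```python
# Minimal sketch: surface conversation patterns across many responses
# for human (System 2) review. Assumes `sentence-transformers` and
# `scikit-learn` are installed; all specifics here are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [
    "Onboarding was confusing and slow.",
    "I never found the training material.",
    "My manager supported me from day one.",
    "Great mentorship during my first weeks.",
    # ... hundreds more in a real analysis
]

# Fast, System 1-like pass: embed each response into a vector space.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# Group similar responses; the analyst chooses k and inspects clusters.
kmeans = KMeans(n_clusters=2, n_init="auto", random_state=0)
labels = kmeans.fit_predict(embeddings)

# Human System 2 step: a sample per cluster goes to the analyst, who
# validates or rejects each machine-suggested pattern.
for cluster in range(kmeans.n_clusters):
    members = [r for r, lbl in zip(responses, labels) if lbl == cluster]
    print(f"Pattern {cluster}: {members[0]!r} (+{len(members) - 1} more)")
```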

Humans' System 2 Input to Collective Intelligence

  • Complex Problem-Solving: AI can process vast amounts of data, distilling insights humans can then analyze using System 2 thinking. For example, AI could summarize the possible clauses for a contract requiring suppliers to share sustainable sourcing information, and humans could select the most appropriate ones given the relationship with the partner, the company's background, and similar context.
  • Strategic Planning: AI aids strategic planning by offering simulations, forecasts, and scenario analyses. These provide the information humans need to engage in deep System 2 thinking, carefully considering various options and their long-term consequences. AI could also generate "red team" scenarios of what might go wrong in the supplier relationship, helping stress-test solutions.
  • Decision Support Systems: AI is a powerful decision-support tool that provides detailed reports and data-driven recommendations. For instance, AI can summarize past status reviews of supplier relationships similar to the one at hand. Humans can then apply System 2 thinking to evaluate these inputs and make final decisions.
  • Framework-based reasoning: Humans can apply theoretical constructs (e.g., frameworks) as a lens to critique information provided by AI to filter its output and guide further AI work. Importantly, humans can also lead the AI to use specific human-made frameworks to guide its reasoning, hence incorporating symbolic thinking derived from human research. For instance, AI can be asked to look at options with specific lenses (e.g., "triple bottom line" in the case of sustainable sourcing).
  • General critique: GenAI makes mistakes, and human critique and quality control are valuable, even if only in the form of requiring other, unrelated, and possibly specialized models to double-check the initial model's output (see the sketch after this list).
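To make the last two points concrete, here is a minimal sketch of a critic pass: a second model reviews a first model's draft through a human-chosen framework before any human review. It uses the OpenAI chat API as one concrete option; the model names and prompts are illustrative, and in practice the critic would ideally be an unrelated or specialized model.

```python
# Minimal sketch: framework-guided critique of one model's output by a
# second model, before human review. Assumes the `openai` package and
# an OPENAI_API_KEY in the environment; model names are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

# First pass (System 1-like generation): draft the contract clauses.
draft = ask(
    "gpt-4o-mini",
    "Draft three contract clauses requiring suppliers to share "
    "sustainable-sourcing information.",
)

# Second pass (System 2 support): a different model critiques the
# draft through a human-made framework, the "triple bottom line".
critique = ask(
    "gpt-4o",
    "Critique the clauses below using the triple bottom line framework "
    "(people, planet, profit). Flag gaps, ambiguities, and likely "
    f"failure modes:\n\n{draft}",
)

print(critique)  # humans still make the final call on both outputs
```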

We aren't just talking about user interfaces. We are talking about designing a more deliberate synergy process, one where humans are supported holistically (UI, UX, the AI itself) in their role as critical thinkers: for instance, the AI asking us questions, guiding us through a problem-solving flow (here and here), or enlisting us to improve quality control, among other patterns. And crucially, doing so not just one-to-one but also in groups and networks of people.

This is what I call ACI (augmented collective intelligence) System 1/2.

As often happens with generative AI, a recent turn of events might change things: the launch of more powerful reasoning models, like OpenAI o1. While these are early days, the new models show that incorporating typical human "System 2" thinking methods helps the AI achieve more sophisticated reasoning, planning, and complex problem-solving. This raises the bar for humans, but it doesn't eliminate the value we bring as custodians of System 2. For now, I see the new AI models as moving along the continuum between System 1 and System 2: a sort of System 1.5, to put it crudely.


Warning: Humans stay in, not on, the loop

Despite the potential of integrating System 1 and System 2 thinking into human-AI collaboration, several significant challenges still need to be addressed.

1. Difficulty in Transitioning Between Systems: One of the main hurdles is that many people struggle with the handover between fast, intuitive thinking and slower, analytical reasoning, particularly when guiding machines in real time. The smooth transition required to optimize human-AI collaboration is not an inherent skill for most individuals, and this difficulty often leads to inefficiencies and errors when working with AI systems. For instance, people fall prey to biases and use cognitive shortcuts when reviewing AI's output.

2. Risk of Losing Human Oversight: Recent research highlights a critical risk: the potential for humans to become overly reliant on AI, leading to a phenomenon often described as "falling asleep at the wheel." When humans trust AI too much to handle tasks, they may disengage from critical thinking, reducing their ability to catch mistakes or make nuanced judgments. This over-reliance can be dangerous, particularly in high-stakes environments where vigilance is crucial.

3. Lack of Awareness and Knowledge: End-to-end processes often need to be designed across tools: predictive (classic) AI, Robotic Process Automation, Business Intelligence tools, and Generative AI each have their place in many processes, but only if the flow is designed intentionally. The landscape of AI tools is vast and complex, and without proper understanding, users may misuse these tools or fail to use them to their full potential.


The Way Forward: Technology, Process, and People

Several solutions can be implemented to overcome these challenges.

Developing Hybrid Systems: Beyond improving user interfaces, it is also crucial to consider hybrid systems—combinations of AI and human input that dynamically shift between different types of processing.

  • Adaptive AI Systems: These systems can start with fast, heuristic-based processing (similar to System 1 thinking) for routine tasks and switch to more complex, deliberate processing (akin to System 2 thinking) as task complexity increases. For instance, an AI system could use quick heuristics to filter and sort data but switch to more advanced, and different, models when deeper analysis is required, with its own flow of prompting and agents across multiple inference cycles. This adaptive approach allows for greater flexibility and efficiency in human-AI collaboration (a minimal routing sketch follows).
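Here is a minimal sketch of that routing idea: a cheap heuristic decides whether a query stays on the fast path or escalates to a slower, multi-step flow. The complexity check and both handlers are illustrative stand-ins, not a production design.

```python
# Minimal sketch of adaptive routing between a fast, System 1-like path
# and a deliberate, System 2-like multi-step flow. The complexity
# heuristic and both handlers are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Query:
    text: str

def looks_complex(q: Query) -> bool:
    # Stand-in heuristic: in practice this could be a small classifier
    # or the model's own self-reported confidence.
    markers = ("why", "trade-off", "compare", "plan")
    return len(q.text.split()) > 30 or any(m in q.text.lower() for m in markers)

def fast_path(q: Query) -> str:
    # System 1-like: a single cheap inference, or even a cached lookup.
    return f"[fast model] quick answer to: {q.text}"

def deliberate_path(q: Query) -> str:
    # System 2-like: multiple inference cycles (decompose, solve,
    # self-critique), possibly with stronger models at each step.
    steps = ["decompose the problem", "solve each part", "critique the result"]
    return f"[deliberate flow: {' -> '.join(steps)}] answer to: {q.text}"

def route(q: Query) -> str:
    return deliberate_path(q) if looks_complex(q) else fast_path(q)

print(route(Query("What is our standard payment term?")))
print(route(Query("Compare the trade-offs between onshoring and dual sourcing.")))
```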

Building Scaffolding for Human-Machine Interaction: Creating supportive structures or frameworks is essential for smoother interaction between humans and machines.

  • User Interfaces: One practical solution our team at MIT is exploring is designing user interfaces and workflows that allow seamless switching between quick, automated responses and more detailed, analytical discussions. Such interfaces empower users to leverage System 1 and System 2 thinking as needed, promoting a balanced approach to decision-making.
  • Transparency and Explainability: AI systems must offer transparent and explainable outputs, especially when engaging in System 2 tasks. When users understand the reasoning behind AI's recommendations, they can more effectively apply their analytical skills, enhancing both their trust in the system and their ability to make informed decisions.

Building Skills for Augmented Thinking: To capitalize on these advanced systems, we must invest in developing the skills needed for augmented thinking.

  • Human roles will shift towards orchestration, strategy, and critical decision-making, with machines handling much of the "how" work. Key skills include critical thinking, people management, systems thinking, digital literacy, and domain expertise.
  • A new curriculum is needed to prepare individuals and teams for effective collaboration with AI, integrating foundational thinking skills, cognitive flexibility, and adaptive learning to enhance individual and collective intelligence in complex environments.
  • Humans must direct the collective cognitive attention to the right things (the right "whys"), ensure the approach is right (the right "what"), and critique the "how" that machines will increasingly suggest.

Humans, in groups and individually, are GenAI's System 2, at least for now. Let's organize ourselves accordingly.


This article is part of a series on AI-augmented Collective Intelligence and the organizational, process, and skill infrastructure design that delivers the best performance for today's organizations. More here. Get in touch if you want these capabilities to augment your organization's collective intelligence.


