The Evolution and State of AI: A Journey Through Forming, Storming, Norming, Performing, and Reassessing
Artificial Intelligence (AI) has come a long way since its inception, evolving from Alan Turing's 1950 question "Can machines think?" into a robust technology shaping industries, economies, and everyday life.
To understand AI's journey, we can look at Bruce Tuckman's team development framework, which outlines the stages of forming, storming, norming, performing, and adjourning (or reassessing). This analogy reveals how AI, much like a team, has evolved through various phases of growth, conflict, resolution, and adaptation, all while introducing new challenges around trust, privacy, autonomy, and the emergence of new cultures and norms.
[Figure: the five stages of team development over time and the resulting rise and fall in team effectiveness. © Recipes for Wellbeing]
Forming: The Birth of AI and Trust in Capabilities
The forming stage is when a team comes together for the first time, full of excitement and potential but lacking clarity and trust. The same can be said of AI's early days. In the mid-20th century, AI pioneers like Alan Turing and John McCarthy laid the foundation for what AI could be—machines that could mimic human intelligence. However, during this formative period, AI's potential was largely theoretical, with limited practical application.
Just as early teams struggle to trust one another's skills, trust in AI's capabilities was initially low. Early AI systems could play chess or solve basic math problems but were far from today's sophisticated models. Researchers had to build confidence in the technology's potential while grappling with the limitations of early algorithms and computing power. Trust in data, the fuel for AI, was also minimal, because AI systems required large, high-quality datasets that were scarce at the time. As the technology advanced and AI systems demonstrated their potential, trust in both data quality and AI's capabilities began to grow.
Storming: Conflicts Over Trust, Data, and Privacy
In the storming phase, conflicts arise as team members challenge roles, goals, and processes. AI entered its storming phase as it transitioned from experimental models to real-world applications. With the rise of machine learning in the 1990s and early 2000s, AI began to show more tangible results, but this progress brought challenges to the surface, particularly around trust in data, insights, and privacy.
As AI models became more data-driven, the need for vast data led to concerns about data quality, bias, provenance, and transparency. This mirrored the trust issues teams face when roles and responsibilities aren't clearly defined. AI systems were found to reflect biases in their training data, leading to distrust in their decisions. For instance, biased algorithms in hiring or law enforcement ignited distrust in AI's fairness and accountability.
At the same time, privacy concerns intensified as AI systems began to handle more personal data. The trade-off between innovation and the protection of personal information became a central issue. Governments and organizations struggled to regulate the fast-evolving technology, leading to public skepticism and regulatory challenges, much like a team struggling to define rules of engagement. These privacy concerns spurred research into privacy-enhancing technologies (PETs) such as tokenization, differential privacy, federated learning, synthetic data, confidential computing, and homomorphic encryption.
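Of these techniques, differential privacy is among the most widely adopted: it adds calibrated random noise to an aggregate statistic so that no individual's record can be reliably inferred from the result. A minimal sketch in Python of the Laplace mechanism (the dataset and function names here are illustrative, not taken from any particular library):

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two independent Exponential(mean=scale)
    # draws follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1 (adding or removing one
    # record changes the count by at most 1), so Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of seven individuals.
ages = [34, 29, 41, 52, 38, 45, 27]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
print(f"Noisy count of people over 40: {noisy:.2f}")
```

Smaller values of epsilon add more noise and give a stronger privacy guarantee; real deployments also track a cumulative privacy budget across repeated queries.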
Norming: Building Trust and the Onset of New Cultures
The norming stage is when a team starts to resolve conflicts and establish clear roles, building trust along the way. For AI, this phase represents the current era, where AI has found widespread adoption across industries, and the focus has shifted toward refining the technology, creating new norms, and embedding it into organizational cultures.
Today, trust in AI's capabilities has grown, largely thanks to the success of deep learning models, Natural Language Processing (NLP), and AI-powered systems like chatbots, recommendation engines, and autonomous vehicles. Companies have built robust AI strategies, incorporating AI into decision-making processes and everyday operations. Alongside this, new cultural norms have emerged around data-centric decision-making and automation. The rise of roles like data scientists, AI ethicists, and machine learning engineers reflects this cultural shift. We are now in an age of workers augmented by AI agents that can comprehend context, reason, and take action.
However, trust is still being built around data governance and privacy. Governments have stepped in to regulate AI through policies like the EU's General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the EU AI Act. These regulations have established new norms for how AI can collect, store, and use personal data, helping restore trust and create a more secure environment for AI development. The role of policymakers in shaping the future of AI and ensuring its ethical use is crucial and will continue to be a significant factor in AI's evolution.
Performing: AI's Autonomy and Its Transformative Impact
In the performing stage, a team reaches peak efficiency, working seamlessly toward its goals. AI is now entering this phase in several key areas. Autonomous systems, from self-driving cars to AI-driven healthcare diagnostics, are not just theoretical but operational. The rise of AI autonomy in decision-making—whether in predictive analytics, real-time language translation, or robotic process automation—illustrates AI's growing capabilities and impact.
However, as AI systems become more autonomous, new questions arise about the balance of trust and accountability. Who is responsible when an AI system makes a mistake, particularly when it operates independently? This stage brings forth complex ethical and legal challenges that need to be addressed, just as a high-performing team must continually reassess how to distribute responsibility and decision-making power.
Adjourning or Reassessing: Reflecting on AI's Impact and Future
The final phase, adjourning or reassessing, is a moment of reflection and planning for what comes next. AI is at a pivotal point where the collective community reassesses its impact and considers the future. Issues of privacy, trust, and ethical and responsible use remain central to discussions about AI's long-term trajectory. With advancements in generative AI, such as large language models like Generative Pre-trained Transformer (GPT), there's a growing need to reassess how we handle data, ensure AI's transparency, and prevent its misuse.
This reassessment also involves thinking about how AI will shape new societal norms in the future. As AI continues to evolve, it will require constant collaboration between technologists, ethicists, policymakers, and society. This collaboration is crucial to ensure that AI remains a force for good, aligned with human values and ethical principles.
Conclusion
The evolution of AI closely follows the team development framework of forming, storming, norming, performing, and reassessing. From its early experimental days to its current transformative role, AI's journey reflects the challenges of building trust, safeguarding privacy, navigating autonomy, and establishing new cultural norms. As AI moves forward, the reassessment phase will be crucial in ensuring that it continues to develop in ways that benefit society, promote ethical use, and build lasting trust.