Design for AI: Understanding Mental Models
AI, or Artificial Intelligence (spelling it out for the search engines), is changing the way people interact with technology, and designers must stay ahead of the curve.
One key aspect of designing for AI is understanding mental models—our internal representations of how systems work. Mental models help people predict outcomes, interact smoothly with interfaces, and reduce cognitive load. They let us operate on autopilot. When it comes to AI, mental models are in a state of flux, shaped by both current technology and the speculative nature of future AI capabilities.
Let’s try to describe some existing mental models around AI and predict new models that are forming as AI evolves. Each mental model is useful for product design, ensuring that AI systems are intuitive, trustworthy, and easy to use.
🟢 The Magic Box Mental Model
Hah?
This is perhaps the most common mental model people have of AI today: AI as a magic box that no one understands but that can seemingly do anything. Users input something (a question, a task, an image), and AI outputs something remarkable, almost as if by magic. The inner workings of AI remain mysterious, creating both excitement and fear.
What?
Think of chatbots like ChatGPT or AI art generators like DALL·E. Many users don’t fully grasp how these models work; they simply provide prompts and receive sophisticated outputs. The complex algorithms behind the scenes are not visible to them.
Why?
Designing for this mental model requires balancing transparency and usability. While users expect magical results, it's essential to build trust by setting and clarifying expectations, and by reminding users that AI doesn't read their minds and can't do good magic without an even better-explained request.
Explainable AI (XAI) design principles can help here. For example, providing “Why this result?” explanations in search engines or recommendation systems can demystify AI just enough to make users feel confident without overwhelming them with technical details.
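As a rough illustration, here is a minimal TypeScript sketch (all names and strings are hypothetical, not a real API) of what a "Why this result?" payload could look like, so the interface can surface a one-line reason next to each result:

```typescript
// Hypothetical shape of a result that carries its own short explanation.
interface ExplainedResult {
  title: string;
  score: number;       // model relevance score, 0..1
  reasons: string[];   // short, user-facing signals behind the ranking
}

// Turn the top signals into one sentence for a "Why this result?" toggle.
function whyThisResult(result: ExplainedResult, maxReasons = 2): string {
  const top = result.reasons.slice(0, maxReasons);
  return top.length > 0
    ? `Shown because ${top.join(" and ")}.`
    : "Shown based on your overall activity.";
}

const article: ExplainedResult = {
  title: "Intro to Mental Models",
  score: 0.91,
  reasons: ["you read similar design articles", "it is popular with UX designers"],
};
console.log(whyThisResult(article));
// "Shown because you read similar design articles and it is popular with UX designers."
```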
What are other principles?
🚩 Transparency
AI systems should openly share how they function and make decisions. Transparency allows users to understand the inputs, processes, and factors behind AI outputs.
In healthcare AI systems, for example, doctors need to see how the AI arrives at diagnoses, including which data points (e.g., lab results, medical history) were most influential in the decision-making process. This could be achieved through visual dashboards that show the weighting of different factors.
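A hedged sketch of the data such a dashboard might consume; the field names are illustrative, not a real clinical API:

```typescript
// Illustrative shape of an explanation behind an AI-assisted diagnosis.
interface FactorWeight {
  factor: string;        // e.g. "HbA1c", "blood pressure", "family history"
  contribution: number;  // signed contribution to the prediction
}

interface DiagnosisExplanation {
  diagnosis: string;
  confidence: number;      // 0..1
  factors: FactorWeight[];
}

// Pick the strongest drivers so the dashboard can chart them first.
function topDrivers(explanation: DiagnosisExplanation, n = 3): FactorWeight[] {
  return [...explanation.factors]
    .sort((a, b) => Math.abs(b.contribution) - Math.abs(a.contribution))
    .slice(0, n);
}
```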
🚩 Tailored Explanations
Explanations should be adaptable to the user’s expertise and context. For example, different user types (e.g., engineers vs. end-users) will require different levels of detail in AI explanations.
For instance, a digital pathology AI system might provide technical details to doctors about how a diagnosis was made while offering a simplified, layperson-friendly explanation to patients. This ensures both parties get the information they need in a format they understand.
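One way to sketch this in code, assuming the system stores both a technical and a plain-language version of each finding (the strings below are invented for illustration):

```typescript
// The same finding rendered at two levels of detail depending on the audience.
type Audience = "clinician" | "patient";

interface Finding {
  technical: string;  // model- and data-level detail for experts
  plain: string;      // layperson-friendly summary
}

function explainFor(finding: Finding, audience: Audience): string {
  return audience === "clinician" ? finding.technical : finding.plain;
}

const finding: Finding = {
  technical: "High nuclear atypia score in region 12 was the main driver of the malignancy prediction.",
  plain: "The system found cell changes in one area of the sample that your doctor should review.",
};
console.log(explainFor(finding, "patient"));
```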
🚩 Justifiability
AI systems should be able to justify their decisions, especially in high-stakes scenarios. This means providing clear, logical reasons for why a particular outcome was reached.
In financial services, when a customer's loan is rejected by an AI model, the system should explain the specific factors (e.g., low credit score, high debt-to-income ratio) that led to the rejection. This helps users not only understand and possibly correct their situation, but also keeps them from resenting the AI.
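A minimal sketch of a justifiable decision object, assuming the model exposes the factors it checked (names, values, and rules are made up):

```typescript
// Pair every factor with the rule it was checked against, so a rejection
// can be explained in concrete, correctable terms.
interface DecisionFactor {
  name: string;        // e.g. "credit score", "debt-to-income ratio"
  value: number;
  requirement: string; // human-readable rule, e.g. "must be above 650"
  passed: boolean;
}

interface LoanDecision {
  approved: boolean;
  factors: DecisionFactor[];
}

function rejectionReasons(decision: LoanDecision): string[] {
  if (decision.approved) return [];
  return decision.factors
    .filter((f) => !f.passed)
    .map((f) => `${f.name} (${f.value}) did not meet the requirement: ${f.requirement}.`);
}
```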
🚩 Error Awareness and Feedback
The system should flag potential errors, uncertainty, or the risk of delivering wrong results in its outputs and communicate this to the user upfront, allowing for human oversight and correction.
For example, in autonomous driving AI, the system should alert the driver when it encounters ambiguous situations, like low-visibility conditions, and suggest manual control. This feedback loop builds trust while maintaining safety.
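In code, the core of this principle can be as simple as a confidence check that adds an explicit notice when the system is unsure; the threshold below is a hypothetical, product-specific value:

```typescript
// Attach an uncertainty notice to low-confidence outputs instead of
// presenting them as if they were certain.
interface ModelOutput<T> {
  value: T;
  confidence: number; // 0..1
}

const HANDOFF_THRESHOLD = 0.6; // hypothetical cutoff, tuned per product

function withUncertaintyNotice<T>(output: ModelOutput<T>): { value: T; notice?: string } {
  if (output.confidence < HANDOFF_THRESHOLD) {
    return {
      value: output.value,
      notice: "The system is not confident about this result. Please review it or take over manually.",
    };
  }
  return { value: output.value };
}
```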
🚩 User Control and Interactivity
Users should have some control over the AI system’s decisions and outcomes, including the ability to query or correct decisions when they believe the AI has made an error.
In image recognition software, for instance, a user can correct a misidentified object, allowing the AI to learn from the correction. Over time, this helps improve the system’s accuracy while giving the user control over outputs.
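A small sketch of that correction loop (hypothetical names): the point is that the user's fix is recorded as a labeled example and acknowledged, not silently discarded:

```typescript
// Store user corrections as labeled examples for later review or retraining.
interface Correction {
  imageId: string;
  predictedLabel: string;
  correctedLabel: string;
  timestamp: number;
}

const corrections: Correction[] = [];

function recordCorrection(imageId: string, predicted: string, corrected: string): string {
  corrections.push({ imageId, predictedLabel: predicted, correctedLabel: corrected, timestamp: Date.now() });
  // A real product would queue this for review or retraining.
  return `Thanks! We'll use this to improve recognition of "${corrected}".`;
}
```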
🚩 Actionable Explanations
Explanations should empower users to take actionable steps. When an AI system delivers an unfavorable outcome, users should be informed on how to improve future results.
AI systems used in hiring might inform candidates why their application was unsuccessful and provide advice on skill improvements or additional qualifications to enhance their chances next time. This will not only remove a dead end in the experience but also humanize it.
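As a sketch, an actionable explanation can be modeled as a reason paired with a suggested next step (everything below is illustrative, not a real hiring system):

```typescript
// Pair each unmet requirement with a concrete suggestion the user can act on.
interface ActionableReason {
  reason: string;    // why the outcome was unfavorable
  nextStep: string;  // what the user can do about it
}

function formatFeedback(reasons: ActionableReason[]): string[] {
  return reasons.map((r) => `${r.reason} Suggested next step: ${r.nextStep}`);
}

const feedback = formatFeedback([
  {
    reason: "The role requires hands-on experience with data pipelines.",
    nextStep: "Highlight or build a portfolio project involving ETL work.",
  },
]);
console.log(feedback[0]);
```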
🚩 Continuous Learning and Adaptability
AI should evolve with user feedback, continually improving the relevance and accuracy of its outcomes. This adaptability helps AI systems better meet user needs over time.
Customer service AI platforms could learn from user interactions, adjusting how they present solutions based on previous user responses and preferences, leading to increasingly personalized and effective interactions.
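A rough sketch of that adaptability, assuming the platform tracks which answer formats a user responds well to (all names are hypothetical):

```typescript
// Keep a simple per-user preference score for each answer format and
// nudge it up or down based on how the user reacts.
type AnswerFormat = "step-by-step" | "short-summary" | "video-link";

const formatScores: Record<AnswerFormat, number> = {
  "step-by-step": 0,
  "short-summary": 0,
  "video-link": 0,
};

function recordReaction(format: AnswerFormat, helpful: boolean): void {
  formatScores[format] += helpful ? 1 : -1;
}

function preferredFormat(): AnswerFormat {
  return (Object.keys(formatScores) as AnswerFormat[])
    .reduce((best, f) => (formatScores[f] > formatScores[best] ? f : best));
}
```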
🚩 Cohort and Global Explanations
Explainable AI should provide explanations at multiple levels—local (specific to one user or decision), cohort (explaining decisions for a subset of users), and global (explaining the overall decision-making logic).
In credit scoring, AI could explain individual decisions (local), identify trends across similar profiles (cohort), and show general decision rules (global), offering a comprehensive view of how the system operates at various scales.
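A compact sketch of the three scopes; the summaries are placeholder strings, not real model output:

```typescript
// Serve the same decision logic at three levels of granularity.
type ExplanationScope = "local" | "cohort" | "global";

interface ScopedExplanation {
  scope: ExplanationScope;
  summary: string;
}

function explainAtScope(scope: ExplanationScope): ScopedExplanation {
  switch (scope) {
    case "local":
      return { scope, summary: "Your application was declined mainly because of a high debt-to-income ratio." };
    case "cohort":
      return { scope, summary: "Applicants with a similar income and credit history are most often declined for the same reason." };
    case "global":
      return { scope, summary: "Across all applicants, credit history length and debt-to-income ratio carry the most weight in this model." };
  }
}
```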
🚩 Contextual Awareness
Explanations should be contextual and relevant to the specific user situation, taking into account the environment, the task at hand, and the individual’s prior interactions with the system.
In AI-driven navigation systems, for example, when a route is suggested, contextual explanations—such as real-time traffic data, road closures, or user preferences (e.g., avoiding tolls or having a great view)—should be provided to explain why one route is recommended over another.
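A sketch of a contextual route explanation, where each suggestion carries the context signals that produced it (names and values are invented):

```typescript
// A route suggestion that explains itself with the context it used.
interface ContextSignal {
  kind: "traffic" | "closure" | "preference";
  detail: string;
}

interface RouteSuggestion {
  routeName: string;
  etaMinutes: number;
  because: ContextSignal[];
}

function explainRoute(route: RouteSuggestion): string {
  const reasons = route.because.map((s) => s.detail).join("; ");
  return `${route.routeName} (${route.etaMinutes} min) was chosen because: ${reasons}.`;
}

const scenic: RouteSuggestion = {
  routeName: "Coastal road",
  etaMinutes: 42,
  because: [
    { kind: "closure", detail: "the highway exit is closed" },
    { kind: "preference", detail: "you prefer routes with a view" },
  ],
};
console.log(explainRoute(scenic));
```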
🟢 The Human-Like Mental Model
Hah?
AI often gets anthropomorphized. Users expect AI to behave like humans: understanding context, showing empathy, even having personalities (and occasionally receiving marriage proposals). This mental model is shaped by AI assistants like Siri and Alexa, where users expect conversational and emotional interaction rather than a mechanical process.
What?
When people interact with AI-driven voice assistants, they often use polite phrases such as “please” and “thank you,” expecting the AI to respond in a human-like manner. Similarly, users might get frustrated when AI doesn’t pick up on subtle cues in conversation, as they would expect from a human.
Why?
Product teams must carefully plan AI's behavior to manage these human-like expectations. Too much anthropomorphism can lead to frustration when AI inevitably fails to meet human emotional complexity. Thoughtful persona design is crucial—AI can be friendly and approachable without pretending to be something it’s not. Using visual cues, tones, and clear boundaries between AI's capabilities and human interaction can help manage user expectations.
🟢 The Co-Pilot Mental Model
Hah?
In this mental model, AI is seen as a collaborative partner, augmenting human capabilities rather than replacing them. Users perceive AI as a tool that can assist in decision-making, creativity, and problem-solving, working alongside them as a co-pilot. No wonder Microsoft reserved this term for their product.
What?
Think of AI systems like GitHub Copilot for coding or Grammarly for writing. These tools are designed to support and enhance user capabilities without taking full control of the task. The user remains in charge, while AI offers suggestions or alternative ways to achieve the goal.
Why?
Designers need to craft interfaces that allow users to feel empowered rather than overshadowed by AI. Co-pilot mental models work best when AI suggestions are seamlessly integrated into the user’s workflow, with an option to accept or decline recommendations. Feedback loops should be implemented to ensure users understand the value AI adds to their tasks while also giving them full control over final decisions.
🟢 The Personal Assistant Mental Model
Hah?
Users increasingly view AI as a 24/7 personal assistant: one that analyzes data to anticipate needs, make recommendations, and streamline processes before they even ask for help. AI here becomes a proactive, trusted assistant, making the experience more personalized and efficient.
What?
Netflix's recommendation engine and Google's predictive text in Gmail are good examples. The AI analyzes past behavior and preferences to predict what the user will want next, whether that's a new show or the completion of a sentence. But the best example may well be the much-anticipated Apple Intelligence.
Even today, Apple devices, through various features integrated into iOS and macOS, leverage the Personal Assistant mental model to create a seamless and personalized user experience. The core idea behind this mental model is that AI anticipates user needs, making suggestions or completing tasks proactively without requiring explicit commands.
Siri Suggestions, for example, offers personalized recommendations based on user behavior. Siri proactively analyzes patterns in how and when apps are used and then offers suggestions at relevant moments. It might recommend launching a specific app at a certain time of day based on your routine, such as suggesting a workout app when it knows you typically exercise in the morning.
Apple's predictive text feature learns from a user’s typing habits and suggests words or phrases to help users complete sentences faster. This feature analyzes previous conversations and writing patterns to predict what the user intends to say next.
The Photos app on Apple devices uses machine learning to proactively create Memories, or curated collections of images and videos that are likely to have sentimental value to the user. These memories are generated based on factors like location, time, and face recognition.
Apple Intelligence is expected to boost this mental model and supply users with truly proactive assistance. By combining advanced generative AI with deep personal context, Apple enhances Siri's ability to perform tasks across apps and anticipate user needs, like suggesting reminders, booking reservations, or adding contact details from a text. Features such as personalized notifications and system-wide predictive actions will offer contextually relevant actions, just as a personal assistant would.
Why?
Predictive AI is all about relevance and personalization. Designers should focus on minimizing the friction between prediction and user intention. The design must accommodate some level of personalization control—allowing users to fine-tune or adjust predictions while ensuring the AI remains accurate and helpful. Avoid overwhelming the user with too many recommendations and ensure they can easily dismiss irrelevant suggestions.
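A small sketch of that control surface: dismissing a suggestion also records why, so the signal can feed back into personalization (names are hypothetical):

```typescript
// Dismissal that carries a reason, so the system can tune future suggestions.
type DismissReason = "not-relevant" | "already-done" | "show-less-like-this";

interface Suggestion {
  id: string;
  text: string;
}

function dismissSuggestion(suggestion: Suggestion, reason: DismissReason): void {
  console.log(`Dismissed "${suggestion.text}" (${reason})`);
  // A real product would persist this and lower the weight of similar
  // suggestions in the user's personalization profile.
}
```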
🟢 The Teacher-Learner Mental Model
Hah?
As AI becomes more sophisticated, an emerging mental model is the idea of a two-way learning system where both the user and the AI continuously adapt to each other. Users see AI not only as something that helps them but also as something they can teach to better serve their needs. This is akin to the AI learning from user behavior and refining its actions over time.
What?
Think of an advanced voice assistant that not only follows commands but also remembers your preferences over time. For instance, after correcting it a few times, the AI starts to recognize your preferences for home lighting, music, or food orders without needing explicit instructions each time.
Why?
This model introduces the concept of feedback loops and long-term personalization. Designers must create interfaces that allow users to train AI without feeling like they’re doing extra work. Subtle prompts for feedback, like “Was this helpful?” (super boring, but do we have better examples?) or personalized settings that evolve over time can help users feel that the AI is learning and adapting to their preferences.
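One less boring alternative, sketched below: instead of a generic "Was this helpful?", the prompt names the specific adaptation and offers keep/undo, so users see exactly what they are teaching the system (all strings and names are invented):

```typescript
// A feedback prompt that names the adaptation the AI just made.
interface AdaptationPrompt {
  message: string;
  onKeep: () => void;
  onUndo: () => void;
}

function buildPrompt(adaptation: string): AdaptationPrompt {
  return {
    message: `I ${adaptation}. Keep it that way?`,
    onKeep: () => console.log(`Keeping: ${adaptation}`),
    onUndo: () => console.log(`Reverting: ${adaptation}`),
  };
}

// Usage: "I dimmed the lights the way you usually like them in the evening. Keep it that way?"
const prompt = buildPrompt("dimmed the lights the way you usually like them in the evening");
console.log(prompt.message);
```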
🟢 The Guardian Mental Model
Hah?
As AI evolves, there is growing potential for a guardian mental model, where users perceive AI as a protective entity. This AI doesn’t just assist but actively safeguards users from errors, malicious actors, and even themselves. It could be responsible for ethical decision-making, security, or even moral judgments.
What?
Consider autonomous driving AI that not only drives the car but also makes split-second decisions to protect passengers in life-threatening situations. Or think of AI in healthcare that monitors patient data in real time, preventing potentially dangerous mistakes before they occur.
Why?
Designing for this mental model requires a focus on trust and reliability. AI needs to feel responsible and give users peace of mind. Transparency about how decisions are made is crucial, as is clear communication in critical moments. For example, a self-driving car AI might need to clearly explain why it made certain decisions in response to an unexpected situation. This mental model also pushes designers to consider the ethical implications of AI and how to communicate these to users in a way that maintains trust.
Any other mental models?
Afterword
Designing for AI requires an understanding of the mental models that users develop when interacting with technology. Whether AI is seen as a magical entity, a co-pilot, an assistant, or a guardian, these mental models shape how users perceive, trust, and use AI systems. As designers, we must anticipate these models and use them in the design process.