Artificial (Emotional) Intelligence.
The windows were ajar, gleaming white against the fresh grass outside that seemed to extend a little way into the house. A breeze blew through the room, moving curtains in at one end and out the other like pale flags, twisting them toward the frosted wedding cake of the ceiling, and then rippling over the wine-colored rug, casting a shadow on it as the wind does on the sea.
F. Scott Fitzgerald used words to ‘design’ moments like this one from his novel “The Great Gatsby” to emotionally connect with his audience.
According to designers at IDEO, a leading global design consultancy that created the framework for Design Thinking, Experience Design is “the crafting of distinct, memorable, and transformative moments in time.” We, as designers of anything digital or physical that humans interact with, orchestrate sensory inputs to create these moments in real life.
How much can/should we rely on AI to help us in this pursuit? I have been fascinated by the pace at which AI has become designers’ collaborative partner. A partner who has read all the novels, movie scripts, and articles, seen an infinite number of visuals, and can remember it all at any given time to extract patterns. Like many designers, I have a mix of excitement and cautious curiosity about how this partnership might evolve from here.
Transformation of the Design Process
Currently, the role of Generative AI in design is most prominent in the divergent phases of the creative process, where its biggest benefit is how quickly a vast number of ideas can be generated and communicated. Computing systems are transforming from mere tools with which designers create into collaborative partners that help them learn, ideate, and visualize.
The designer’s role in this partnership is poised to become more curatorial in the convergent phase: selecting the most relevant, feasible, and emotionally resonant options, and crafting elegant solutions out of this set of possibilities. Designers who can harness AI’s power will be able to produce much more in much less time. This is where the theory that “AI will not replace designers, but designers who can use AI will” comes from. The key is knowing what to ask and how to ask it the right way. It would be fair to say this increases the significance of contextual theory courses within design schools, and it will be interesting to see how design education adapts to these new post-AI creative norms.
AI and Human Emotions
Human experiences are stamped with emotions. We already know that we remember and connect with an interaction better (whether it is with technology, objects, or people) when we associate it with certain emotions, consciously or subconsciously. Designers and artists who work with emotions often rely on surprise, the thrill of the new, sense-making, association, context building, and a sense of achievement to imbue moments of experience with emotion.
In its current state, Generative AI’s process is quite linear: it selects the most likely outputs for a string of inputs and comes up with options that make sense. So far, its outputs have mainly been single-modal - words, images, or sounds - whereas human emotions are much more complex than that. In any given second, our brain and body process a multitude of sensory inputs. As Juhani Pallasmaa puts it in his book "The Eyes of the Skin": “A walk through a forest is invigorating and healing due to the constant interaction of all senses and modalities - polyphony of senses."
Considering the progress made in just over a year since the initial launch of ChatGPT, it would not be foolish to anticipate big leaps for multi-modal Generative AI in the short to mid-term future, enabling more elaborate, cohesive experiences that cater to human emotions.
The Future of AI-Powered Human-Computer Interaction
Predicting the outcome of Generative AI’s effects on how humans interact with computers is not an easy task. After all, we are talking about a new technology that can process 3.7 million chemical composition alternatives within six weeks in search of a way to produce breathable oxygen from elements that already exist on Mars - a task that would have taken a human scientist 2,000 years.
While we can only imagine what AI will bring to HCI in the longer term, we can still make some moderate guesses about what is plausible in the near to mid-term future.
Here are three ways it could affect how technology serves us:
1) The Connected World Gets Smarter
The power of AI comes from the size of the data pool it can pull patterns from to influence outputs, which is handy when it comes to creating meaningful connections between existing information and a user’s momentary needs and goals. AI will streamline making sense of existing information across multiple platforms and allow the creation of more informed interaction patterns that can, in the end, improve the quality of the overall user experience and provide more value to businesses in shorter timeframes.
For a recent project I worked on during my time at Frog Design, while exploring possibilities for a future-of-retail scenario, we imagined a physical product label that recognizes who is looking at it through sensors and spatial computing. The label then draws on inputs like past purchase data, nutritional facts, allergy restrictions, responsible manufacturing details, and loyalty status collected from multiple platforms. The AI processes all of this information and updates the label design to prioritize the information you care about most - making the purchase decision simpler and that much more informed.
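To make this concrete, here is a minimal Python sketch of the kind of prioritization logic such a label could run. Everything in it - the ShopperProfile fields, the scoring weights, the fact tags - is an illustrative assumption of mine, not part of the actual concept:

```python
# Hypothetical sketch of a smart label deciding which facts to surface first
# for the person looking at it. Field names, tags, and weights are invented
# for illustration only.
from dataclasses import dataclass, field


@dataclass
class ShopperProfile:
    allergies: set = field(default_factory=set)          # e.g. {"peanuts"}
    frequently_bought: set = field(default_factory=set)  # product IDs from past purchases
    loyalty_tier: str = "none"                           # e.g. "gold"
    cares_about_sustainability: bool = False


def prioritize_label_facts(product: dict, shopper: ShopperProfile) -> list[str]:
    """Return the product's label facts ordered by relevance to this shopper."""
    scored = []
    for fact, tags in product["facts"].items():
        score = 1  # baseline relevance for every fact
        if shopper.allergies & set(tags.get("allergens", [])):
            score += 10  # allergy conflicts always surface first
        if shopper.cares_about_sustainability and tags.get("sustainability"):
            score += 3
        if shopper.loyalty_tier != "none" and tags.get("member_offer"):
            score += 2
        if product["id"] in shopper.frequently_bought:
            score += 1
        scored.append((score, fact))
    return [fact for _, fact in sorted(scored, reverse=True)]


# Example: a shopper with a peanut allergy sees the allergen warning first.
granola = {
    "id": "granola-42",
    "facts": {
        "Contains peanuts": {"allergens": ["peanuts"]},
        "Recyclable packaging": {"sustainability": True},
        "Member price: $4.99": {"member_offer": True},
    },
}
print(prioritize_label_facts(granola, ShopperProfile(allergies={"peanuts"})))
```

A real system would learn these weights rather than hard-code them; the sketch only shows how signals from multiple platforms could collapse into a single, personalized ordering on the label.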
2) Constant Optimization Through User Interaction and Sensor Data
We, as experience designers, are trained to study and understand users’ needs, goals, and behaviors, and to use this information to create the best possible solution for our target audience and the brands we connect them with. This information is sometimes limited, as the research we conduct depends on the sample pool, timeframe, test scenarios, and so on. In other words, we try to design for the scenarios that cover most of the situations in which a product or service will be used, while some less common cases may inadvertently receive less attention.
For a minute, let’s think of user interaction and sensor data as prompts. By constantly learning from the user and optimizing the experience within a set of constraints defined by the designer, AI can help us expand product use cases beyond the most common ones. It can allow us to cater to exceptions in user profiles, and even to exceptional moments across a user’s interaction journey - like a temporary impairment or mood changes driven by hormonal activity throughout the day.
In real-world scenarios, this could come to life anywhere from an app that considers your mood as it curates the day’s news, to a driving interface that amplifies certain visual or audio indicators when it senses you are getting tired, a food delivery app that makes suggestions based on your hormonal levels or upcoming schedule, a smart shower that adds minerals to the water to compensate for elevated stress, or a productivity app that makes subtle UI adjustments at the individual level to adapt to each person’s most efficient way of working.
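As a rough illustration of what “optimizing within designer-defined constraints” could mean in code, here is a small Python sketch built around the tired-driver example above. The fatigue signal, parameter names, and ranges are hypothetical; the point is only that the adaptive layer may propose whatever it likes, while the shipped experience always stays inside the limits the design team has set:

```python
# Hypothetical sketch: live sensor data tunes the interface, but only inside
# hard bounds set by the designer. Signal names and ranges are invented.

DESIGNER_CONSTRAINTS = {
    "alert_volume":   (0.3, 1.0),  # never fully silent, never above max
    "icon_scale":     (1.0, 1.6),  # baseline size up to 160%
    "contrast_boost": (0.0, 0.4),
}


def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))


def adapt_interface(fatigue: float) -> dict:
    """Map a 0..1 fatigue estimate to UI parameters, clamped to designer limits."""
    # The optimizer here is a simple linear mapping; in practice it could be a
    # learned model, but its proposals always pass through the same guardrails.
    proposed = {
        "alert_volume":   0.3 + 0.9 * fatigue,
        "icon_scale":     1.0 + 0.8 * fatigue,
        "contrast_boost": 0.5 * fatigue,
    }
    return {
        name: clamp(value, *DESIGNER_CONSTRAINTS[name])
        for name, value in proposed.items()
    }


# A drowsy driver (fatigue around 0.8) gets louder alerts and larger icons,
# but never beyond what the design team signed off on.
print(adapt_interface(0.8))
```

The clamping step is the designer’s guardrail: whatever the model learns about an individual user, the experience can only drift within the envelope the design team defined.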
3) AI-Generated Multimodal Experiences
As Human-Computer Interaction moves outside of glass screens and into our physical world, the way we communicate with computers also evolves. Through ambient interactions and spatial computing, we are looking for new ways to integrate the objects around us into our daily routines. This opens up the possibility of expanding these interactions from the two dominant modes (visual and audio) to the other senses, including touch, scent, and taste.
With that in mind, the next big moment in Generative AI for creativity could be harmonizing sensory inputs across multiple interaction modes to create more cohesive, elaborate experiences. Just last week, Google introduced Gemini Pro for Bard, an experimental multi-modal AI model designed to do exactly that: “to understand and reason about the user's intent, use tools, and generate bespoke user experiences that go beyond chat interfaces.”
As AI becomes more literate in these different modes, we can expect it to learn, through data, the implications of different variations within each mode and to suggest pairings and recipes that amplify their individual outputs. Then we are talking about AI’s generative skills expanding into culinary experiences, immersive storytelling, and physical and mental wellness, to name just a few.
The extent to which AI will impact our lives and our interactions with technology is still quite speculative. I, for one, have no idea what the design experience will look like in 5 years. But right now, I can do nothing but embrace what AI can unlock in my process and how it can help me imagine things I don’t even know exist yet. AI is here to stay, and designers can play a key role in using it for good - collaboratively creating “moments” that elevate human experience to a more advanced level.
P.S. My ChatGPT prompt for the Great Gatsby reference at the beginning of the article was: “Can you give me an excerpt that depicts a happy moment in a novel or a movie from popular culture? It should be so descriptive, it almost makes the audience see it.”
** Opinions expressed in this article are my own.