The pitfalls of using AI to influence decisions

We increasingly rely on AI systems to influence decisions that once belonged to humans alone. It’s no surprise - we’ve been promised great gains from incorporating AI into our business models: organizational efficiency and productivity gains, higher customer retention, competitive advantage… the list goes on. For example, Mastercard leveraged AI to optimize its hiring processes, leading to a significant lift in talent acquisition. And AI isn’t just being used to influence our own decisions, but also the decisions of the people around us and our customers. And, old news maybe, but 75% of what is watched on Netflix comes from personalized recommendations, and Netflix continues to innovate on its personalization features with machine learning and AI. I don’t need to describe reality any further; you already know this!

That’s why it’s difficult to write about AI. How do you say something that hasn’t already been said one hundred times? So, I figured I would share a personal experience with you.

NoA recently helped deliver a recommendation system for a large Swedish grocer, with AI as a central component of the solution, aimed at nudging customers toward more climate-friendly shopping decisions. The logic is simple: the customer opts in to receive recommendations, and when they add a product to their shopping cart, our AI-integrated model finds a more climate-friendly substitute and we try to nudge them to make the switch.
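
To make that flow concrete, here is a minimal sketch of the logic. The field and function names are hypothetical and the similarity search is a stand-in for the actual AI component, but it shows the shape of the opt-in, substitute, nudge loop:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Product:
    product_id: str
    name: str
    co2e_per_kg: float  # hypothetical climate-footprint field

def suggest_substitute(
    cart_item: Product,
    find_similar: Callable[[Product], list[Product]],
) -> Optional[Product]:
    """Return a more climate-friendly substitute, or None if nothing beats the item."""
    candidates = find_similar(cart_item)  # the AI-driven similarity search (stand-in)
    greener = [p for p in candidates if p.co2e_per_kg < cart_item.co2e_per_kg]
    return min(greener, key=lambda p: p.co2e_per_kg) if greener else None

def nudge(opted_in: bool, cart_item: Product, find_similar) -> Optional[str]:
    """Only customers who opted in are nudged, and only when a greener substitute exists."""
    if not opted_in:
        return None
    substitute = suggest_substitute(cart_item, find_similar)
    if substitute is None:
        return None
    return f"Would you consider swapping {cart_item.name} for {substitute.name}?"
```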

I will share two primary learnings from the development phase of the project:

  1. Using AI for this purpose worked surprisingly well! I’ll admit, I was a skeptic. But models that are generally available today are already of very high quality.
  2. Minor tweaks to the input data caused seriously wild swings in the model’s output. Like, the difference between a project’s success and failure, kind of swings. A Large Language Model (LLM) operates within the context it is given, and the Garbage In, Garbage Out principle is ever-present when working with AI. I’ll elaborate.

Garbage In, Garbage Out

The “Garbage In, Garbage Out” principle in machine learning and AI basically means that the quality of the data used to train models or generate predictions dictates the quality of your output.

So, as long as you ensure your input data is accurate and clean that should do the trick, right? Not quite.

When working with LLMs, this means you need to define your parameters and input data in a way that aligns with your objectives for the solution or product. That’s a very generic statement, I know, so let me try to put things into context.

In the aforementioned project we helped deliver, we had to make some decisions on how the model should search for product substitutes. In essence, we had to decide what data to feed the AI model (a simplification, but sufficient to help your understanding). I’ll show a hypothetical example.

If we feed the model just the product names for a fashion store, what happens? T-shirts are most similar to other T-shirts, chinos are quite similar to jeans, and hats are highly dissimilar to shoes. What if we include the brand? This makes things more interesting. Jeans with brand ‘A’ are more similar to jeans with brand ‘A’ than jeans with brand ‘B’, even though they’re all jeans. What if we include the country of origin for the supplier? Hypothetically, is a product with brand ‘A’ from Norway more similar to a product with brand ‘B’, also from Norway, than a product with the same brand ‘A’, but from Sweden? If yes, is that a good or a bad thing?
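
To illustrate (a sketch only, not the actual production setup): if similarity is computed from text embeddings, the fields you concatenate into that text directly change which products end up “closest”. The example below assumes an off-the-shelf sentence-embedding model; the products are the hypothetical fashion-store items above.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Any text-embedding model would do; this one is just a convenient example.
model = SentenceTransformer("all-MiniLM-L6-v2")

products = {
    "jeans_A_norway": {"name": "Slim jeans", "brand": "Brand A", "origin": "Norway"},
    "jeans_A_sweden": {"name": "Slim jeans", "brand": "Brand A", "origin": "Sweden"},
    "jeans_B_norway": {"name": "Slim jeans", "brand": "Brand B", "origin": "Norway"},
}

def similarity(p1: dict, p2: dict, fields: list[str]) -> float:
    """Cosine similarity between two products, described using only the given fields."""
    texts = [" | ".join(p[f] for f in fields) for p in (p1, p2)]
    a, b = model.encode(texts)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for fields in (["name"], ["name", "brand"], ["name", "brand", "origin"]):
    print(fields)
    print("  same brand, different origin:",
          round(similarity(products["jeans_A_norway"], products["jeans_A_sweden"], fields), 3))
    print("  same origin, different brand:",
          round(similarity(products["jeans_A_norway"], products["jeans_B_norway"], fields), 3))
```

Running the same comparison with each field set will typically shift which pair scores highest, which is exactly the kind of swing described in point 2 above.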

You can see how these interactions quickly become interesting. The issue is that we are left completely in the dark about how these AI models weigh the different parameters, both because of their inherent black-box nature and because model suppliers are not willing to spill their secret sauce. This leaves us exposed. Without appropriate AI governance, we risk significant challenges, including poor decisions for our businesses, eroding trust with our users, and ethical lapses.

So it’s not just about feeding our model clean, accurate data. The context we feed our AI models will greatly affect the recommendations we generate, the decisions we make, and how we impact our users. When working with AI, we have to take great care and responsibility in defining our model’s context window so that it aligns with our business objectives, agreements, and ethical guidelines.
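
One practical way to exercise that care (again, a sketch with hypothetical field names) is to make the context an explicit, reviewable artifact: a single allowlist of fields agreed with the business, and one function that assembles what the model actually sees.

```python
# Hypothetical example: the context the model sees is assembled in one place,
# from an allowlist of fields agreed with business, legal and ethics stakeholders.

APPROVED_FIELDS = ["name", "category", "climate_score"]  # hypothetical allowlist

def build_context(product: dict) -> str:
    """Build the text fed to the model, using only approved fields."""
    missing = [f for f in APPROVED_FIELDS if f not in product]
    if missing:
        raise ValueError(f"Product is missing approved fields: {missing}")
    return " | ".join(f"{f}: {product[f]}" for f in APPROVED_FIELDS)

# Fields not on the allowlist (brand, supplier country, ...) are deliberately
# excluded until their effect on the recommendations has been evaluated.
```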

Consider this a brief aside on the topic of bias in AI, which I’m sure we could write a 40-pager on. But, in the spirit of trying to bring something new to the discussion without regurgitation, we’ll stop here. If you want an example where bias in AI went horribly wrong, you can point and laugh at Google’s failure with their Gemini model for image generation, where ‘cognitive bias’ caused the model to produce wildly inaccurate results.

Julian Cæsar Andersen - Consultant at NoA Connect Norway
