On Thursday, I attended the Data+AI Summit by Databricks, which focused on how businesses can leverage generative AI, machine learning, and analytics to drive better decision-making. The event showcased innovations like using natural language to derive insights and integrating AI with data management.
I attended to broaden my perspective beyond behavioural science, curious to explore where our field intersects with these advancements. While it was fascinating to experience a space dominated by engineers and data scientists, I couldn’t shake the feeling that something vital was missing from the conversation: a deeper understanding of how humans make decisions.
Here are my 10 takeaways from the event:
- Companies are building these highly sophisticated business intelligence tools to help them make better decisions, but the people driving the development—primarily engineers—aren’t necessarily equipped with an understanding of human decision-making.
- Business intelligence systems need to understand the context behind the questions being asked. Engineers focus on data retrieval, but they might not grasp the nuances of why someone asks a particular question in the first place.
- As these systems become more autonomous and capable of interacting with users through natural language, there’s an even greater need to understand how people ask questions, interpret responses, and use the information to make decisions. Without this understanding, we risk creating systems that are technically brilliant but underutilised because they don't fit naturally into human cognitive patterns.
- Systems designed to aid human decision-making are built by people who don't fully understand human cognition, decision frames, or information search behaviour. It seems to me that we're creating these advanced tools but missing a crucial layer that considers the cognitive processes of the end users.
- Leaving tech solely in the hands of engineers without considering the human side can lead to unintended consequences—like how Facebook’s algorithms were optimised for engagement, which inadvertently amplified divisive content.
- This goes beyond traditional UX: it's not just about making a system intuitive, but about embedding an understanding of how humans actually think, decide, and behave into the very systems we're using to drive business strategy.
- Right now, the intersection of tech and behavioural science tends to focus on user experience or persuasive design. This is about thinking further upstream: how do we design AI systems that are not only intuitive but also aligned with how people naturally process information, form judgements, and make decisions?
- Maybe one reason this overlap isn't more visible (or even a recognised field) is that engineering and behavioural science have traditionally been siloed. But when we're talking about AI systems that are supposed to support decision-making, these two worlds need to come together.
- For engineering and BeSci to collaborate, businesses need to see the value of domain expertise in each field separately, as well as the impact of bringing them together. Bringing together Data Intelligence and Business Intelligence is not enough; we need to integrate Human Intelligence too.
- There’s also an opportunity in how we think about adoption: engineers focus on getting the system to work technically, but what happens when it’s time for real people in organisations to actually use these tools effectively? This is where the behavioural science of change management and trust-building could play a vital role.
As an outsider, it's difficult to fully appreciate the challenges of implementing generative AI within companies. AI projects are expensive and complex due to fragmented data ecosystems and proprietary formats. Most tools aren’t plug-and-play—instead, they require extensive planning and integration across multiple departments. Even after overcoming the technical challenges, the real impact materialises only when these systems are used effectively.
What I find mildly shocking is that millions are being invested into developing these systems without a behavioural-scientist-in-the-loop. It’s like solving 90% of the problem but missing the critical last mile: ensuring these tools align with how humans actually think and make decisions. Without that human touch, even the most technically brilliant systems risk falling short of their potential.
So here's my question: Is there already a discipline that integrates behavioural science, decision science, and data intelligence into the design of business systems? Or is this an emerging space that hasn’t yet been formally defined? I’m eager to learn more—where should I be looking?
#dataintelligence #businessintelligence #genai #behavioralscience #DAIWT #databricksworldtour
Digital Health and Transformation
Elina Halonen, really interesting; I always find your posts insightful. I assume that because it's engineer-led and still relatively immature tech, the complexity of the user/human need, and how this affects usage and outcomes, has not been considered anywhere near the extent it should be. This is obviously the case for a lot of large/enterprise software, but we're talking another level here. In my experience, getting value out of an LLM like ChatGPT requires understanding its 'mind' and behaviour, which is why prompt engineering is a thing rather than the other way round; basically, it's not intuitive beyond perhaps the UI. My field is digital health and my academic interest is behavioural science; effective adoption of AI will be one of, if not the, biggest challenges in healthcare (and ideally a personal opportunity!). Would be interested in any further thinking you have in this area.
Treasury Executive | Experienced as Client and Supplier | Strategic Thinker | Implementer | Consultant | Trainer | Entrepreneur | (Procurement, Technology, Behavioural, Organisational & Data Science Expertise)
First, to answer your question. As Hannah Lewis says, it's a new space. What she doesn't say is that it's more a black hole than a space! Consider the following:
1. Increased automation: People build technology to gain efficiencies, which means doing things cheaper and faster and, by cutting humans out of the loop as much as possible, with fewer errors. What does it do? By leaving humans to consider the data and insights provided, it leads to data overload, cognitive fatigue and, from that, decision fatigue. People make worse decisions.
2. UI & UX: Improved UI and UX accelerate the above by helping people to understand and react faster.
3. Reduction in workforces: More automation means fewer people need to be employed. I don't think I need to explain this one.
4. Efficiency savings: Management has two choices: use the savings to invest in more efficiency, doing what they do now but faster and cheaper, or be more effective, doing new things that deliver more value. More people, and therefore more budget-owning managers, are risk averse than otherwise. Which is the lower-risk option? Doing the same thing.
Therefore, a vicious cycle occurs: more automation resulting in worse decisions and a greater reduction in workforce. [TBC]
Human Experience Enthusiast | Modern Ways of Working Strategist | Research & Insights Professional | Formerly at VMware
💯 agree! As I continue to dive into AI, I have had very similar thoughts. And this is certainly reflected in job postings in the space.
Hi Elina Halonen, I am glad to see such a renowned behavioral scientist looking into this matter. I have been working with marketing data/research for a long time and have concerns similar to those you raised here. I am planning my PhD project as an experimental study of behavioral design applied to BI systems, specifically focused on corporate management decision-making. Please keep sharing as you learn along the way; it will be great to follow here.