Have you considered this when using AI and Data Science to make decisions critical to your business and to your organization?
Data Science is one of the most talked-about topics in the professional world. It is currently the second fastest-growing job, alongside Data Analyst (AI and Machine Learning Specialist is number one), and annual salaries in the USA range between $77K and $197K. That suggests data scientists are adding value to businesses. But let’s be honest and ask ourselves: how much do we, as decision-makers, understand about the ways in which Data Science can help us make better decisions that improve our business?
The general appreciation for Data Science is rather technical. The hard skills required are programming, statistics and mathematics, data manipulation, and machine learning; the soft skill is complex problem-solving. Those skills have a more technical tone than a scientific one and, as such, most people building their careers as data scientists are former programmers.
Decision-makers want to use Data Science and AI for activities such as predictive analysis, scenario analysis, quantifying and tracking KPIs, and gathering decision suggestions. They use all that information to draft a set of candidate decisions that can be run through decision simulators so the results can be observed. Those results help decision-makers either settle on one of those decisions or formulate new ones and run them through the simulations again. Their expectation is that the resulting, highly analyzed decision will deliver better business results and outcomes.
Decision-makers also expect Data Science and AI to be applied to customer acquisition, activation, retention, segmentation, and personalization; to product development and innovation; to improving operational efficiency and risk management; and to other activities.
Not using Data Science correctly increases the risk of financial losses due to increased operational costs, faulty compliance implementation, development failures, customer loss, and misinformed or misleading decision-making.
The question then is: what might we be missing, that we shouldn’t, if we want to maximize the chances of good, well-informed decision-making?
To answer that question we might want to start with another question: are former programmers and technologists the only type of data scientists we need? The answer is ‘No’. Consider that the vast majority of applications end up being used by people at some point or another, in different ways and for different purposes. We want those solutions, services, and applications to add value for those people, and to that effect we sometimes add designers to the team to increase the attractiveness of the product or service, and UX experts to enhance the user experience. However, that isn’t enough when it comes to using an AI as an additional expert, because AI isn’t yet advanced enough to be reliable.
For critical projects where people are impacted, decision-makers should seriously consider adding a behavioral psychologist and a sociologist to their team. Those two roles don’t need to be expert technologists. A behavioral psychologist’s value is in understanding the customer, the outputs from the AI, and the solution being developed from a behavioral perspective, and then determining whether the AI is being used correctly. The sociologist brings a more contextual appreciation that takes cultural aspects into account. The two roles then become the compass that sets the team and the AI in the right direction. The behavioral psychologist and the sociologist need to be well-versed in the current state of AI, its pros and cons, the ethical implications behind AI, and the risks AI could bring to people inside the organization and to customers.
Without the contributions of the behavioral psychologist and the sociologist, it is easier for an AI to hallucinate, to provide biased information, or to behave in biased or unethical ways. The outcomes could include loss of customer trust, wrong medical diagnoses, invasion of privacy, and more. That can damage the relationship between the business and its customers, produce inadequate compliance, yield unreliable technical solutions, and so on, to the point that the organization goes belly-up and has to shut down. What you thought would put you ahead of the competition could become a fast track to your business’s demise.
Responsible AI is the practice of designing, developing, and deploying artificial intelligence systems in a way that prioritizes ethical, transparent, and accountable use. It is a relatively new approach to applying Data Science and AI, yet it misses recommending these roles because its perspective is also technological. I invite you to seriously consider adding a behavioral psychologist and a sociologist to your team.