Leading the Way: Why and How Your Company Should Adopt AI Responsibly

AI is revolutionising the way we live and work. In a previous blog post, I highlighted that trust is the crucial ingredient in enabling companies to fully harness AI’s potential. A recent Salesforce study reveals that public perception of AI still has significant room for improvement. Some people expect the government to establish a regulatory framework that ensures ethical AI development and deployment. Others believe that companies have a responsibility to create and use reliable AI solutions and provide guardrails.

The research, conducted in Belgium, the Netherlands and Sweden, surveyed a total of 3,226 people, with respondents evenly distributed across the three countries. Nearly half of the Belgian respondents say they’re looking to the government to provide more direction and structure and to take responsibility for protecting their data.

In contrast, participants from the Netherlands and Sweden believe that companies developing AI products should take responsibility for data protection. Only 23% of Swedish respondents think that the government needs to play a role in this important challenge.

The EU AI Act can be a game changer

Looking closer at these results, we can conclude that the upcoming EU AI Act will be a valuable asset in advancing responsible and safe AI. As a result, public trust in AI should also increase. The Act provides a legal framework that many people are waiting for. It also contains comprehensive guidelines for AI developers and companies to ensure ethical AI development and deployment.

In a rapidly changing world, people seek guardrails and standards to protect them from data misuse, misinformation and other AI-related risks. Both public and private entities need an actionable framework for responsible AI development and adoption.

However, companies should not wait for the government to establish and enforce rules. If the trust issues that have arisen alongside the emergence of social media platforms over the past decade have taught us anything, companies should take the initiative and proactively address these concerns.

At Salesforce, we have always taken our responsibility to develop trusted AI seriously. We have published guidelines for AI and actively advocate for guardrails in line with the EU AI Act. For example, we oppose a one-size-fits-all approach. Instead, we support risk-based AI regulations that account for different contexts. We also promote transparency, aiming for users to understand how AI systems might impact them.

Despite their concerns, people see AI’s potential

Let’s look at some other interesting conclusions from the study. The survey indicates that there is still room for improvement in the public perception of AI. More than a third of Belgian and Swedish respondents have a positive view of AI solutions. Dutch participants are more hesitant, with only 27% believing in the positive impact of this technology.

Nearly half of respondents in each country express distrust in AI, mostly because of a lack of understanding. According to the survey, most people associate AI with voice assistants (e.g. Siri), chatbots and ChatGPT. Only a minority realises the benefits of everyday enterprise AI deployments, such as purchase recommendations on websites or detecting and combating fraud.

People particularly fear that AI will take away human control in decision-making, reduce human interaction, and eventually replace jobs. The survey reveals that 53% of Belgian and 46% of Dutch respondents are concerned about potential job loss due to AI.

Despite these concerns, there are positive signs that attitudes toward AI will improve. Most respondents agree that AI offers significant opportunities, both for their country’s economy and for their work. They recognise that AI can enhance efficiency and solve problems beyond our human capabilities.

Tips for adopting AI in your company

Governments, developers, companies, and even users – all of us have a role in creating a trusted foundation for AI. As a company, you can certainly take the lead. A previous global Salesforce survey among C-suite executives revealed that two in three believe that trust in AI drives revenue, competitiveness and customer success. Here are some ideas to consider when adopting AI in your company:

  • Create a clear AI ethics framework: engage a wide range of stakeholders in your business, including employees with different backgrounds, to develop this framework
  • Establish an AI ethics committee: this group should be responsible for developing ethical principles and overseeing compliance with these guidelines
  • Choose or develop technologies that promote transparency and explainability: communicate with your customers and users about how these technologies work and how they impact them
  • Provide training for your staff: encourage a culture of responsible AI use, reducing risk and increasing organisational accountability

Share your thoughts about the figures from our survey, or contact me if you want to exchange ideas on creating a positive culture around AI.

Falke Van Onacker

VP WW Sales @ VaultSpeed | deliver new data products in <2 sprints | 50x your Data Pipelines across any platform

Very helpful Bob, thanks for sharing!

milton glen

Search Engine Optimization Analyst at SEO

Adopting AI responsibly is crucial for long-term success and trustworthiness. AI enables efficiency and innovation, but ensuring fairness, transparency, and accountability is essential. Start by implementing ethical guidelines, conducting regular audits, and using reliable AI platforms like SmythOS, which offer robust governance features. Responsible AI adoption will enhance your company's reputation and drive sustainable growth.

Sjourd Wijdeveld

Experienced Tech Executive (CDO / CIO / CTO) - Digital & IT Transformation, Innovation, Cyber Resilience, Energy Transition & Renewable Energy

Hi Bob Vanstraelen, fully agree we need to take responsibility regarding the use of AI. You are forgetting one important dimension of this responsibility: sustainability. The development, testing and use of AI models consume enormous amounts of energy. On top of that, the datacenters supporting the AI models also need enormous amounts of water for cooling. We as consumers of AI should be very aware of the impact of using AI tools on our environment and make a trade-off between the benefits and sustainability impact.
