Navigating the AI Revolution: Building Trust in a World of Advanced Algorithms

Generative AI is revolutionizing the way we live and work. Most organizations and industries are now experimenting to see how they can use this technology to their advantage. One thing is certain: AI will never succeed without trust.

Recently, at the Gartner Symposium in Barcelona, I had the unique opportunity to engage with a diverse group of CIOs and participate in a roundtable on AI. The experience underscored the pivotal role of trust in the deployment and scaling of AI technologies. In discussions with these technology leaders, the recurring theme was the need for a responsible approach to AI, particularly in terms of data ethics, privacy, and control. Engaging with leaders at the forefront of integrating AI into their strategies provided real-world insight into how trust forms the backbone of successful AI implementation.

Policies and standards

It is encouraging to see that the European Union is working on legislation to ensure trustworthy AI: in the EU, the use of AI will be regulated by the AI Act. While artificial intelligence has great potential to improve our healthcare system, enable more efficient manufacturing, and lead to more sustainable energy, we should never forget that AI is not without risks. Businesses are eager for guardrails and guidance, and they are looking to government to provide the right standards and policies.

End users need to understand what AI recommends, especially when that knowledge is used to make AI-driven decisions. Policymakers could therefore create risk-based frameworks, push for commitments to ethical AI design, and convene multi-stakeholder groups. As we continue to ask more of AI, we must also ask more of each other to responsibly harness the power of this technology. There are numerous ways that businesses and governments can increase trust in AI.

Protecting our data

AI is all about data, so we need comprehensive privacy legislation to protect the data of our customers and users. By separating the data from the Large Language Model (LLM), organizations can be confident that their data is protected from third-party access without consent. When data is accessed by an LLM, we must keep it safe through methods like secure data retrieval, dynamic grounding, data masking, toxicity detection, and zero retention. And if we collect data to train AI models, it is important to respect data provenance and ensure that companies have consent to use that data.
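To make one of these methods concrete, here is a minimal sketch of what data masking before a prompt reaches an LLM can look like. The patterns, placeholder tokens, and the `mask_pii` function are illustrative assumptions, not a complete PII taxonomy or a production safeguard:

```python
import re

# Illustrative sketch: replace obvious PII (emails, phone numbers)
# with placeholder tokens before the text is sent to an LLM.
# The mapping stays local, so originals never leave the organization.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    """Return masked text plus a local mapping to restore originals."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token, 1)
    return text, mapping

masked, mapping = mask_pii("Contact Ann at ann@example.com or +32 478 12 34 56.")
print(masked)  # Contact Ann at <EMAIL_0> or <PHONE_0>.
```

After the model responds, the same mapping can be used to re-insert the original values locally, which is the sense in which the data stays separated from the LLM.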


Do not just focus on the models

There can be no one-size-fits-all approach to regulation, because that would hinder innovation, disrupt healthy competition, and delay adoption of the technology. Regulation should differentiate between the context, control, and uses of AI, and then provide guardrails accordingly. Whereas generative AI developers should be accountable for the way models are trained and the data they are trained on, those deploying the technology need to establish rules governing interaction with the model.

Finally, bigger is not always better. Smaller models can offer high quality responses and be better for the planet. Governments should therefore incentivize carbon footprint transparency and help scientists advance carbon efficiency for AI.

Trust in AI is as important as functionality. Organizations need AI tools that are available, fault-tolerant, secure, and sustainable. Ultimately, this is how they will build trust within their business and with their customers.


Any ideas on trust and the future of AI? Feel free to share your thoughts!

Thanks for sharing Bob! Glad you enjoyed your visit to Barcelona, though I missed you this time!

Claus Eilskov Schwanenflugel Hansen

Changing the world through technology - North Area Vice President at Salesforce

1y

Bob Vanstraelen Thanks for sharing, super exciting. Yes - Trust is a fundamental VALUE for the human race to utilise the enormous power AI and advanced algorithms can bring to companies, organisations, and the public sector; used right, it can solve the pains and issues we are all struggling with.

Sjourd Wijdeveld

Experienced Tech Executive (CDO / CIO / CTO) - Digital & IT Transformation, Innovation, Cyber Resilience, Energy Transition & Renewable Energy

1y

Hi Bob Vanstraelen, thanks for sharing. GenAI provides great (and sometimes mindblowing) opportunities, but we need to build trust in using it. Transparency over models and data is key to gaining that trust and ensuring we keep understanding how our AI models work.

Ryan Fedell

CEO of East Park | Salesforce Partner & Consultant | SMB Advocate | Ps. It’s a great day to buy Salesforce

1y

Thanks for sharing Bob Vanstraelen. I think trusting the people developing AI is the only way to trust AI.
