Unmasking the impact of shadow AI -- and what businesses can do about it

The AI era is here -- and businesses are starting to capitalize. Britain’s AI market alone is already worth over £21 billion and is expected to add £1 trillion of value to the UK economy by 2035. However, the threat of “shadow AI” -- unauthorized AI initiatives within a company -- looms large.

Its predecessor -- “shadow IT” -- has been well understood (albeit not always well managed) for a while now. Employees using personal devices and tools like Dropbox outside the supervision of IT teams can increase an organization’s attack surface without the C-suite ever knowing. Examples of shadow AI include customer service teams deploying chatbots without informing the IT department, unauthorized data analysis, and unsanctioned workflow automation tools (for tasks like document processing or email filtering).

It’s no wonder that nearly two-thirds (64 percent) of CEOs worry about cybersecurity risks associated with AI. If handled poorly, shadow AI could seep across a business and into the customer experience, resulting in problems like sensitive AI chat data being leaked to the wider organization. As well as financial and reputational damage, shadow AI can cause compliance issues, like Snap’s brush with the ICO last year.

Shadow AI is not only dangerous; it’s also inefficient and risks stifling innovation. Untrained employees often don’t understand how the AI models they’re using actually work, and their efforts escape oversight from company executives, limiting the value delivered to the business. Operating in the shadows also bypasses AI best practices, stunting growth and preventing businesses from maximizing the power of AI.

Generative AI is already transforming workplaces and boosting productivity: 71 percent of employees have used generative AI at work. As AI becomes more accessible through tools like ChatGPT and Google Bard, we must learn the lessons of the past to build the best possible future. To strike the right balance between harnessing AI’s benefits and limiting the risks of shadow AI, business leaders should consider the following steps.

1. Establish a clear AI strategy from the beginning

For 20 years, we have seen businesses embarking on digital transformation, whether in the form of cloud migration, creating data pipelines, or implementing agile approaches to software development. Today, many companies are taking a similar approach to AI. However, only those that put in place long-term strategies will succeed. Digital transformation project failure is rife, and history risks repeating itself if leaders aren’t careful.

Without a long-term strategy that integrates AI into a business's core operations, leaders will struggle to harness AI’s immediate advantages while mitigating risks like shadow AI. This strategy should outline the goals, priorities, and envisioned benefits of AI adoption across the organization.

Moreover, the fundamentals of any AI implementation strategy must be communicated to all key stakeholders. Typically, these range from senior management and legal personnel to IT teams and data scientists -- anyone who will be involved in AI decision-making and governance.

2. Create sensible technical governance frameworks

Blanket restrictions on tools like ChatGPT are not the most efficient way to prevent shadow AI, just as you wouldn’t tackle phishing scams by disabling all your company’s emails. Instead, what’s needed is constant, explainable oversight of what AI tools like chatbots are doing -- and why they are doing it -- enabled by a robust and extensive governance framework.

This could include a wide range of measures, such as automated monitoring of large language model (LLM) content for inappropriate, confidential, or biased information. Such frameworks can also prevent rogue content through custom policies, such as keyword blocking. This helps ensure ongoing legal and ethical compliance, significantly lessening the risk of subsequent harms like data breaches.
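As an illustration only, a keyword-blocking policy can be as simple as a pattern check applied to model output before it reaches users. The sketch below is hypothetical (the pattern list, function name, and handling logic are not from any specific governance product); real frameworks would layer audit logging, human review, and classifier-based checks on top.

```python
import re

# Hypothetical policy: patterns that should never appear in LLM output,
# e.g. internal codenames, payment card numbers, or confidentiality markers.
BLOCKED_PATTERNS = [
    r"\bproject\s+nightingale\b",   # example internal codename
    r"\b\d{16}\b",                  # 16-digit strings that may be card numbers
    r"\bconfidential\b",
]

def check_output(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a piece of LLM-generated text."""
    matches = [p for p in BLOCKED_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return (len(matches) == 0, matches)

if __name__ == "__main__":
    allowed, hits = check_output("Here are the confidential Q3 figures...")
    if not allowed:
        # In a real governance framework this event would be logged for audit
        # and the response withheld or rewritten before reaching the user.
        print(f"Blocked: matched policy patterns {hits}")
```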

Also, it’s never too late to get started. Even if shadow AI usage is already underway, frameworks can minimize future risks because employees are more likely to follow established rules and boundaries if they understand the reasons behind them.

3. Guarantee full connectivity across the business

AI models should be connected to each other, to business processes, and to the people making decisions at every level to mitigate the risks of shadow AI. Far too many organizations (and, by extension, their leaders) deploy AI systems in isolation. Short-term, siloed solutions are not only less likely to be effective across different business functions but also can’t provide context for decisions. When leaders know which models exist, it is far easier to govern them and ensure they remain safe.

One of the ways to ensure full business connectivity is to deploy computational twins, which provide system-wide rules, enabling continuous auditing and explainability. This is because, unlike digital twins -- which capture only a single object or system -- computational twins capture, measure, and play back the whole picture of an organization’s operations.

Computational twins show precisely what’s happening inside a business at all times, bringing a host of benefits and streamlining operations and processes. For example, they can scan for and identify risks like revenue shortfalls or staff shortages well before they would otherwise present themselves by projecting likely future scenarios.

4. Prioritize communication

Lastly, business leaders must prioritize communicating with employees at all levels about the importance of AI governance and the risks associated with shadow AI. This could include providing training programs and resources to boost understanding of AI policies and guidelines -- after all, knowledge is power. Unless every employee feels well equipped to spot potential risks, leaders will struggle to foster a culture of understanding around the impact of shadow AI and how to tackle it.

To effectively minimize shadow AI and reap the technology’s rewards, business leaders and decision-makers must keep human-centric, connected, and robustly governed systems at the forefront of their approach. They must implement a clear AI strategy that has communication at its core from the start.

Those who fail to do so risk reputational, financial, and operational damage from shadow AI. The genie might be long out of the bottle, but leaders who heed these lessons will enjoy countless benefits if they make the right choices today.

Dr Marc Warner is CEO & co-founder, Faculty
