Kiteworks Proactively Protects Confidential IP and Private Data From Exposure in Generative AI LLMs
Generative AI LLMs present significant risk when organizations fail to apply next-generation digital rights management.

The technology landscape will never be the same due to the rise of artificial intelligence (AI). It is transforming lives and businesses, and its impact will play out for many years to come. Huge opportunities exist, but so do corresponding challenges.

In today’s fast-paced digital environment, organizations face the daunting task of managing and securing their intellectual property (IP) and the personally identifiable information (PII) belonging to employees, customers, and various third parties. The rise of generative AI large language models (LLMs) makes that challenge even harder: users can inadvertently feed sensitive content into these tools and expose it.

Data Privacy and Compliance Risks of Generative AI LLMs

Enterprises recognize the privacy and compliance risks. A recent Gartner survey found that businesses rank generative AI as their second-highest risk. A recent Kiteworks press release identified three primary risks of generative AI LLMs:

1. IP being utilized in the training process and subsequently appearing in outputs for other users

2. Violations of data privacy laws due to the use of PII and other sensitive personal information in these AI tools

3. The potential for malevolent entities to harness AI LLM tools to expedite the creation of sophisticated cyber threats

Numerous types of sensitive content can be at risk with generative AI LLMs. The most prominent are:

1. Training data—the content used to train the AI language model

2. Knowledge base data—confidential, proprietary information used to generate responses from the generative AI LLM tool

3. Confidential chatbot interactions in customer support, sales, and marketing scenarios, where PII and other personal data are entered, both intentionally and inadvertently, into the chat interface

This creates significant concern for organizations, to the point that numerous enterprises have banned the use of generative AI at work. One recent study found that as many as three-quarters of organizations are weighing bans on AI tools like ChatGPT and similar LLMs in their workplaces, with 61% considering long-term or permanent bans. Bans, however, will likely prove difficult to enforce, and organizations that fail to leverage generative AI will find themselves at a competitive disadvantage.

Generative AI LLMs can present significant risk to organizations that lack robust digital rights management to control and track what information is ingested into them.

How Generative AI LLMs Put Businesses at Risk

Research shows there are several ways generative AI LLMs can put businesses at risk. (The sources for the data points below are cited in our recent press release.) At the forefront are employees who simply do the wrong thing, either intentionally or inadvertently. Fifteen percent admitted to regularly posting company data into generative AI LLMs, with one-quarter of that data categorized as sensitive. Workers who use generative AI LLMs do so frequently, an average of 36 times per day.

The top categories of confidential information being entered into generative AI LLMs are internal business data, followed by source code, PII, and customer data. One of the biggest risk areas is the plugins and applications built on top of generative AI LLMs: over 30,000 GPT-related projects are listed on GitHub today, and their security is subpar on average compared with other applications. This should be a huge red flag for organizations seeking a competitive advantage through generative AI LLMs. Finally, organizations are behind in implementing processes to mitigate regulatory compliance risks, particularly the protection of PII.

And use of generative AI LLMs continues to skyrocket. One study found that the amount of sensitive data being ingested into generative AI LLMs shot up 60% in a matter of six weeks. Since that study was conducted a few months ago, one can only imagine what the number looks like today.

Cybercriminals See the Opportunity of Generative AI LLMs

Organizations face serious risks when sensitive content is leaked into generative AI language models. Cybercriminals can manipulate these models for malicious purposes, including attacks designed to extract training data. This exposes companies to potential loss of personally identifiable information (PII), IP, and protected health information (PHI). Such breaches violate regulations like GDPR and HIPAA, resulting in fines, reputational damage, decreased productivity, and lost revenue.

With growing adoption of generative AI among employees and contractors, organizations urgently need data protection policies. Most companies exchange sensitive information with countless third parties, multiplying the risks. Robust content controls are essential to secure generative AI use and avoid inadvertent data leaks. Proactive measures taken now can prevent regulatory penalties, brand harm, and financial impacts down the road. Data protection must be a top priority as generative AI usage surges.

Learn how Kiteworks unifies, tracks, controls, and secures sensitive content communications across all your communication channels.

How Kiteworks Tackles Generative AI LLM Data Privacy and Compliance Challenges

Organizations using Kiteworks-enabled Private Content Networks can securely manage confidential information like trade secrets, customer data, PII, PHI, and financials when using generative AI LLMs. Kiteworks delivers capabilities based on three content risk levels:

1. Low Risk: Least-privilege access policies and controls restrict employee and third-party access to content. Watermarking alerts users not to use certain content in generative AI.

2. Moderate Risk: Kiteworks SafeView™ prevents local copies and extraction of content, stopping uploads to generative AI. Policies can limit access time periods and number of views.

3. High Risk: Kiteworks SafeEdit™ uses next-generation DRM to block data from leaving repositories while still enabling collaboration, streaming an editable video rendition to users.

Just as Kiteworks DRM controls what content can be ingested into generative AI, it also governs how end users access and utilize confidential data, including limits on what can be fed into these tools. With Kiteworks, organizations using generative AI can have confidence their data is protected, and those that have banned its use can reassess, knowing their sensitive information is secured.
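
To make the tiered model above concrete, here is a minimal sketch of how a content-risk policy gate like this could be expressed in code. It is purely illustrative and is not Kiteworks’ actual API: the RiskLevel tiers, ContentPolicy fields, and may_send_to_genai check are hypothetical names introduced only for this example.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskLevel(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class ContentPolicy:
    allow_local_copy: bool        # Moderate/High risk: block local copies and extraction
    allow_genai_upload: bool      # whether content may ever reach a generative AI tool
    watermark: bool               # warn users not to paste the content into generative AI
    max_views: Optional[int] = None            # optional view-count limit
    access_window_hours: Optional[int] = None  # optional time-boxed access

# Hypothetical tier definitions mirroring the three risk levels described above
POLICIES = {
    RiskLevel.LOW: ContentPolicy(allow_local_copy=True, allow_genai_upload=True,
                                 watermark=True),
    RiskLevel.MODERATE: ContentPolicy(allow_local_copy=False, allow_genai_upload=False,
                                      watermark=True, max_views=5, access_window_hours=72),
    RiskLevel.HIGH: ContentPolicy(allow_local_copy=False, allow_genai_upload=False,
                                  watermark=True),
}

def may_send_to_genai(user_clearance: RiskLevel, content_risk: RiskLevel) -> bool:
    """Least-privilege check: the user must be cleared for the content's risk tier,
    and that tier's policy must permit upload to a generative AI tool."""
    if user_clearance.value < content_risk.value:
        return False  # user is not cleared to access this content at all
    return POLICIES[content_risk].allow_genai_upload

# Example: even a highly cleared user cannot send moderate-risk content to a generative AI tool
print(may_send_to_genai(RiskLevel.HIGH, RiskLevel.MODERATE))  # False
```

The point of the sketch is the decision order: access is checked first under least privilege, and only then does the content’s risk tier decide whether generative AI ingestion is permitted at all, regardless of who is asking.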

 
