2. Cybersecurity in the AI World
Guardians of AI, by Richard Diver

2. Cybersecurity in the AI World

Will AI cause more headaches, or will it solve scenarios cybersecurity issues? Most likely both. From the attacker perspective it is a new tool they can use to accelerate their attack development and improve quality, while the defender can adopt this technology to improve skilling, knowledge transfer, and create clarity in a massively complex cybersecurity landscape.

Zero Trust for AI ?

The first issue that arises when assessing the potential benefits of generative AI is the concern about what information and actions the AI system will have access to. This needs to be assessed for each AI implementation, and one of the best approaches to use is the approach of Zero Trust principles - with considerations for AI:

  • Verify explicitly: Built-in safety systems should ensure that AI interactions are tied to user identity. Any autonomous actions given to the AI system, via its own credentials and systems access, must be verified continuously.
  • Least privilege: No AI system should have unlimited access to data. All data must maintain the original classification and security controls, and access is granted based on the user privileges. If the data exists and the AI can get to it unrestricted, an attacker has the potential to manipulate the system to divulge the information.
  • Assume breach: assume the AI system will get something wrong, it will make a mistake, and it can be manipulated. With this in mind, how can you detect these errors, mitigate the damage, and respond appropriately when it occurs?


Developer security

The other key element that comes into the conversation about generative AI is the need to focus on developer security. Large language models (LLM), small language models (SLM), and future AI models, are developed and trained by experts that focus on that task alone. When they host the AI models for access by others, it then becomes the engine that powers AI integrated applications. Layers of defense can be put into the AI model and AI platform, but it is in the AI application layer that we gain the ability to control the behavior, safety, and security of our data and interactions with AI.


A diagram showing developer security framework, including design and code, build, deploy, and run
Code-Build-Deploy-Run


This image is part of work developed over the last 2 years, an ongoing project to unite the IT and developer professionals, with security experts - you can read about the image development effort here. This week, at the Microsoft Build conference (May 21-23) we are going to see a lot more conversation about software development security, 11 sessions are dedicated to the topic due to the increased focus on AI safety and security in the application layer. You can register now and watch live or on-demand after the event.

AI supply chain security

Extending the focus on the software security, we must also consider the broader AI supply chain: a combination of the existing software supply chain along with data providers, skills and plugins, and the potential of AI working directly to another AI. Each interaction between humans and AI, or AI and AI, has the potential for a compromise in the weakest-link of the chain. Understanding the inheritance of trust, verification, and dependencies, is crucial to the overall safety of any AI system and the data use within.

Other cybersecurity issues to consider, where AI will have a specific impact, includes identity and access management, insider risk, business email compromise, and data protection. If we think about how we protect email from phishing attacks, we can lay it out like this:


A diagram of the defenses used to protect email messages and attachments
Email security flow

In this flow, we expect all inbound email content might contain malicious content such as social engineering, malware attachments, or phishing links. The primary defense is in the layers of protections that apply at the mail server: scanning content, testing the links, rejecting any spam and identified attacks. The secondary defense occurs in the email client where additional security can apply on the device, in the software, or in the user interface to warn the user that their content may contain external and risky information.

With generative AI, we will go into this in more detail, but for now we can reuse the three-layer approach to understand inbound and outbound flow of user requests to the LLM, and from the LLM to the user.


Image of the defense flow for an AI system
Generative AI security flow


Many of the safety system are built into the AI platform layer, and new capabilities are being offered in the AI application layer each month. You can learn more about AI Platform security solutions in Azure AI Content Safety. For the AI application layer, the first step is to learn about prompt engineering and the development of a strong system metaprompt.

Here is my favorite quote from this chapter:


Quote by Richard Diver "AI can change the threat of insider risk for better or worse due to the speed of reconnaissance, volume of data, and context driven discovery"
Quote by Richard Diver


The book is available now on Amazon - Guardians of AI: Building innovation with safety and security.

In the next newsletter we will explore some of the key insights from Chapter 3: Types of AI System.

To view or add a comment, sign in

Explore topics