Americas

Asia

Oceania

Shweta Sharma
Senior Writer

Cisco’s homegrown AI to help enterprises navigate AI adoption

News
15 Jan 20254 mins
SecuritySecurity Software

Cisco AI Defense is the company’s proprietary AI tool that can validate enterprise AI models and protect them against attacks.

Cisco
Credit: Ken Wolter / Shutterstock

As the world rushes to integrate AI into all aspects of enterprise applications, there’s a pressing need to secure data-absorbing AI systems from malicious interferences.

To achieve that, Cisco has announced Cisco AI Defense, a solution designed to address the risks introduced by the development, deployment, and usage of AI.

According to Tom Gillis, SVP and GM of Cisco Security, the rapid integration of AI into business workflows, which should warrant “multi-year refactoring” of applications to include AI features, is progressing faster than security teams can keep up, creating numerous vulnerabilities for attackers to exploit.

“As this transition unfolds, we observe a few key trends,” Gillis said. “The adoption leads to the bifurcation of emerging toolsets, offering developers a vast array of rapidly evolving options. Consequently, development teams move swiftly, while security teams, tasked with establishing boundaries around the developers’ work, struggle to keep up and often lose track of it.”

Among other things, Gillis pointed out, Cisco AI defense will address this key issue of “Discovery” by providing an inventory of all AI workloads, applications, models, data, and user access across distributed cloud environments.

Cisco’s AI Defense will integrate into its existing network visibility infrastructure consisting of firewalls, web proxies, and secure access gateways, to scan network traffic and identify all existing AI workflows.

Proprietary AI for model validation

The second problem the new offering is trying to address is the need to shift security practices as AI-infused systems become mainstream.

“The thing about AI is that it’s just architecturally different,” Gillis noted. “In a traditional application, you had three layers: the presentation layer (web layer), the application logic, and the data persistence layer. Data resided in the persistence layer, which, by definition, is persistent, while the middle layer did not retain any data.”

With AI, he added, a model is placed in the middle, and data is absorbed into this model. “The model retains and transforms the data, creating an entirely new layer in the stack that requires careful consideration and protection.”

To tackle this challenge, Cisco AI Defense will offer a new detection capability powered by its proprietary AI. It will perform “model validation” through exhaustive testing of model logic to identify any signs of compromise or poisoning.

“We want to ensure that the data used for training is accurate and valid, with no malicious additions to the datasets,” Gillis explained. “Additionally, we need to verify that the guardrails implemented in the model are functioning correctly and that the model is behaving as expected.”

Protection at runtime

While Cisco AI Defense allows for defining guardrails for AI models, the proprietary technology will also enable security teams to implement these protections independently, without interfering with the developers’ control over the models.

“We dynamically calibrate and set guardrails for models before and during production,” Gillis said. “In production, a monitoring system observes normal application behavior and detects abnormalities, such as prompt injection attacks, by flagging actions outside expected patterns.”

This runtime protection, Gillis emphasized, is independent and transparent to the AI model, and lives entirely in the “network.”

Gillis noted that most competing AI safety tools primarily focus on monitoring data exchange and performing data loss prevention (DLP), with their discovery phase generally limited to the straightforward identification of existing AI elements.

“The key difference with Cisco AI Defense lies in understanding the application,” he said. “Unlike other AI safety tools, we conduct model validation and have the capability to enforce protections, such as preventing prompt injection attacks at runtime. Our proprietary models uniquely track application behavior and monitor for any drift.”

Other than prompt injection, the solution is targeted at protecting against data and model poisoning attacks. It will be generally available by the end of February through the Cisco Security Cloud.

  翻译: