Executives are aggressively pressing for all manner of gen AI deployments and experimentation despite knowing the risks — and CISOs are left holding the risk management bag. Credit: GaudiLab / Shutterstock Senior executive perceptions on the promise of generative AI are proving to be a siren song for many CISOs. According to a recent survey from NTT Data, 89% of C-suite executives “are very concerned about the potential security risks associated with gen AI deployments.” But, the report found, those same senior execs believe “the promise and ROI of gen AI outweigh the risks” — a situation that can leave the CISO as the lone voice of risk management reason. And it may be taking its toll, as almost half of enterprise CISOs “hold negative sentiments” about generative AI, feeling “pressured, threatened, and overwhelmed,” according to the survey. The conflict is quite familiar. Senior executives pressure line-of-business chiefs to embrace a new technology to leverage efficiencies and boost the bottom line. But generative AI is risky business — arguably more risky than any technology to date. It hallucinates, overrides guardrails, jeopardizes compliance, and gobbles up sensitive enterprise data. And it’s being embraced by enterprises quickly without proper security hardening, while being pushed by vendors who highlight functionality over security. “It’s the wild west, with lots of AI applications and large language model choices making it tough to vet what’s secure. There are also applications that are masked to look legitimate but are schemes to exfiltrate data and facilitate ransomware,” said Will Townsend, VP and principal analyst at Moor Insights & Strategy. “The concern around data leakage is real. We are already seeing that happen along with prompt injection attacks introducing malicious code.” One of the most problematic gen AI issues CISOs face is how casual many gen AI vendors are being when selecting the data used to train their models, Townsend said. “That creates a security risk for the organization.” Veteran security leader Jim Routh, who has held CISO-level roles at Mass Mutual, CVS, Aetna, KPMG, American Express, and JP Morgan Chase, said generative AI’s penetration into SaaS solutions makes this more problematic. “The attack surface for gen AI has changed. It used to be enterprise users using foundation models provided by the biggest providers. Today, hundreds of SaaS applications have embedded LLMs that are in use across the enterprise,” said Routh, who today serves as chief trust officer at security vendor Saviynt. “Software engineers have more than 1 million open source LLMs at their disposal on HuggingFace.com.” Robert Taylor, an attorney who specializes in AI and cybersecurity legal strategies and serves Of Counsel with Carstens, Allen & Gourley, an intellectual property law firm based in Dallas, said he sees a common theme at all levels within organizations of every size. “They don’t know what they don’t know. At least CISOs are already primed to think of risks and have an idea of the security risks posed by AI,” Taylor said. “But AI comes with a lot of new security risks that they are trying to get their arms around. There are projects in the works trying to assess the many types of security risks that arise with gen AI. I’ve heard categories of security risk numbering into the hundreds to well more than a thousand types of security risks.” All this can take a psychological toll on CISOs, Townsend surmised. “When they feel overwhelmed, they shut down,” he said. “They do what they feel they can, and they will ignore what they feel that they can’t control.” An accelerating issue Meanwhile, as senior execs push forward, leaving their CISOs overwhelmed by the risks, attackers are moving rapidly. “The bad actors are feverishly working to exploit these new technologies in malicious ways, so the CISOs are right to be concerned about how these new gen AI solutions and systems can be exploited,” Taylor said. “Gen AI solutions are not traditional software and services and have some vulnerabilities that other technologies don’t have to deal with.” Worse, Taylor argued, gen AI risks are more amorphous and shifting compared to traditional technology risks. “Gen AI continues to morph post deployment, which creates further opportunities for security risks such as prompt injection, data poisoning, and extracting confidential information or PII” from the information shared with the gen AI, Taylor said. Jeff Pollard, VP and principal analyst at Forrester, pointed out that prompt security, in particular, “is immediate and necessary if the organization has customer-facing — or employee-facing but customer-impacting — prompts that could lead to unauthorized data access or disclosure.” And the problem, Pollard said, is going to get a lot worse — quickly. “It’s important to learn how to secure these now because this is the first version of generative AI that might see widespread enterprise deployment. Agentic AI is coming soon — if it is not already here and that will require these controls and more to secure correctly.” SUBSCRIBE TO OUR NEWSLETTER From our editors straight to your inbox Get started by entering your email address below. Please enter a valid email address Subscribe