Orca Security’s analysis of major cloud infrastructure reveals widespread use of tools with known vulnerabilities, exposed AI models and data, misconfigured systems, and unencrypted data, all in a rush to capitalize on AI.

Security analysis of assets hosted on major cloud providers’ infrastructure shows that many companies are opening security holes in a rush to build and deploy AI applications. Common findings include using default and potentially insecure settings for AI-related services, deploying vulnerable AI packages, and not following security hardening guidelines.

The analysis, performed by researchers at Orca Security, involved scanning workloads and configuration data for billions of assets hosted on AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud between January and August. Among the researchers’ findings: exposed API access keys, exposed AI models and training data, overprivileged access roles and users, misconfigurations, lack of encryption of data at rest and in transit, tools with known vulnerabilities, and more.

“The speed of AI development continues to accelerate, with AI innovations introducing features that promote ease of use over security considerations,” Orca’s researchers wrote in their 2024 State of AI Security report. “Resource misconfigurations often accompany the rollout of a new service. Users overlook properly configuring settings related to roles, buckets, users, and other assets, which introduce significant risks to the environment.”

AI tool adoption: Quick, widespread, and a little sloppy

According to Orca’s analysis, more than half of organizations (56%) whose cloud assets were tested had adopted AI models to build applications for specific use cases. This often means using a cloud service that provides access to AI models, deploying models locally along with training data, deploying associated storage buckets, or using specific machine learning tools.

The most popular service, Azure OpenAI, was used by 39% of organizations with a footprint on Microsoft Azure. Of the organizations that use AWS, 29% had deployed Amazon SageMaker and 11% Amazon Bedrock. For Google Cloud, 24% of organizations had opted into Google Vertex AI.

In terms of model popularity, GPT-3.5 was used by 79% of organizations that adopted AI, followed by Ada (60%), GPT-4o (56%), GPT-4 (55%), DALL-E (40%), and Whisper (14%). Other models, such as Curie, Llama, and Davinci, all had usage rates under 10%.

Popular packages used to automate the creation, training, and deployment of AI models include Python scikit-learn, Natural Language Toolkit (NLTK), PyTorch, TensorFlow, Transformers, LangChain, CUDA, Keras, PyTorch Lightning, and Streamlit. Around 62% of organizations used at least one machine learning package that contained an unpatched vulnerability. Despite the high volume of unpatched versions, most of the vulnerabilities found were low to medium risk, with a highest severity score of 6.9 out of 10, and only 0.2% had published exploits. The researchers speculate that this, along with fears of breaking compatibility, could be why organizations haven’t rushed to patch them.

“It’s important to note that even low- or medium-risk CVEs can constitute a critical risk if they are part of a high-severity attack path — a collection of interconnected risks that attackers can exploit to endanger high-value assets,” the researchers wrote.
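As a rough illustration of how such dependency findings surface in practice, the sketch below checks locally installed machine learning packages against the public OSV vulnerability database. The package list and output format are assumptions chosen for demonstration, not Orca’s tooling, and the script needs the third-party requests library.

```python
# Illustrative sketch: flag installed ML packages with known vulnerabilities by
# querying the public OSV database (https://api.osv.dev). The package list is
# an assumption for demonstration purposes only.
from importlib.metadata import version, PackageNotFoundError
import requests

ML_PACKAGES = ["scikit-learn", "nltk", "torch", "tensorflow",
               "transformers", "langchain", "keras", "streamlit"]

def known_vulns(package: str, installed_version: str) -> list[str]:
    """Return OSV/CVE identifiers affecting the installed version, if any."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": package, "ecosystem": "PyPI"},
              "version": installed_version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

for pkg in ML_PACKAGES:
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        continue  # package not present in this environment
    vulns = known_vulns(pkg, installed)
    if vulns:
        print(f"{pkg}=={installed}: {', '.join(vulns)}")
```

A dedicated scanner such as pip-audit, or the vulnerability management tooling already used for other workloads, would normally do this job; the point is simply that ML dependencies can be checked with the same routine discipline as any other software.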
Insecure configurations could expose models and data

Exposing the code for machine learning models or associated training data could enable a variety of AI-specific attacks, including data poisoning, model skewing, model reverse-engineering, model poisoning, and input manipulation, as well as AI supply-chain compromises in which a library or an entire model is substituted with a rogue one. Attackers could attempt to extort companies by threatening to release their machine learning models or proprietary data, or they could encrypt the data to cause downtime. The systems that train AI models usually have access to significant computing power, so attackers could compromise them to deploy cryptocurrency mining malware.

For example, insecure deployments of Jupyter Notebook, an open-source computing platform for machine learning and data visualization, are regularly attacked in cryptomining campaigns. These instances are often deployed on cloud services such as Amazon SageMaker. Earlier this year, researchers from Aqua Security identified an attack technique called shadow buckets that was possible because six AWS services, including SageMaker, created predictably named S3 data storage buckets. Although AWS has since changed SageMaker’s behavior to introduce a random number into new bucket names, 45% of SageMaker buckets still have predictable names, potentially exposing their users to this attack.

Organizations also regularly expose AI-related API access keys inside code repositories and commit histories. According to Orca’s report, 20% of organizations had exposed OpenAI keys, 35% had exposed API keys for the Hugging Face machine learning platform, and 13% had exposed API keys for Anthropic, the AI company behind the Claude family of LLMs.

“Keep your API keys safe by following best practices, such as securely storing them, rotating keys regularly, deleting unused keys, avoiding hard coding keys, and using a secrets manager to manage their usage,” the researchers advised.

While most organizations were found to practice principles of least privilege for running AI tools in the cloud, some continue to use overprivileged roles. For example, 4% of Amazon SageMaker instances used IAM roles with administrative privileges to deploy notebook instances. This is a risk because any future vulnerability in those services could endanger the entire AWS account through privilege escalation.

Organizations are also slow to adopt security improvements offered by their cloud service providers. One example is Amazon’s Instance Metadata Service (IMDS), which enables instances to exchange metadata securely. IMDSv2 offers significant improvements over v1, with temporary session-based authentication tokens, but a large number of SageMaker users (77%) have not enabled it for their notebook instances. For AWS EC2 computing instances, 95% of organizations scanned by Orca have yet to configure IMDSv2.

The private endpoints feature in Azure OpenAI, which protects communication in transit between cloud resources and AI services, is another example. One in three Azure OpenAI users have not enabled private endpoints, according to Orca’s findings.

Most organizations don’t appear to take advantage of encryption features offered by cloud providers to encrypt their AI data with self-managed keys, including AWS Key Management Service (KMS), Google customer-managed encryption keys (CMEK), and Azure customer-managed keys (CMK).
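As a concrete illustration of the self-managed key option on AWS, the boto3 sketch below sets a customer-managed KMS key as the default server-side encryption for a bucket that might hold training data. The bucket name and key ARN are placeholders, not values from Orca’s report.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder names; substitute your own bucket and customer-managed KMS key ARN.
BUCKET = "example-training-data-bucket"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

# Enforce default server-side encryption with the customer-managed key, so
# objects uploaded without explicit encryption headers are still protected.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                },
                "BucketKeyEnabled": True,  # reduces per-object KMS request costs
            }
        ]
    },
)
```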
“While our analysis didn’t confirm whether organizational data was encrypted via other methods, choosing not to encrypt with self-managed keys raises the potential attackers can exploit exposed data,” the researchers wrote.

Most organizations also fail to change insecure default configurations. For example, 98% of tested organizations failed to disable root access on their Jupyter Notebook instances deployed on AWS SageMaker, which means an attacker who gained unauthorized access to an asset could reach all models and services running on it.

Recommendations for more secure AI

Orca researchers identify several areas where significant improvements can be made to safeguard AI models and data.

First, all default settings should be reviewed, as they can open security risks in a production environment. Organizations should also read the security hardening and best practices documentation provided by service providers and tool developers and apply the most restrictive settings.

Second, as with any IT system, vulnerability management is important. Machine learning frameworks and automation tools should be covered by vulnerability management programs, and any flaws should be mapped and scheduled for remediation.

Limiting and controlling network access to AI assets can help mitigate unforeseen risks and vulnerabilities, especially because many of these systems are relatively new and untested, the researchers advised. The same goes for limiting privileges inside these environments to protect against lateral movement in case an asset does get compromised.

Finally, the data that AI models are built on and ingest is a very valuable asset. As such, it should be protected in transit between services and tools, and at rest, with self-managed encryption keys that are properly protected against unauthorized access and theft.
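To show how several of these recommendations map to concrete settings on AWS, here is a minimal boto3 sketch, not taken from Orca’s report, that launches a SageMaker notebook instance with the more restrictive options discussed above. Every identifier (role ARN, KMS key, subnet, security group) is a placeholder for resources in your own account.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Placeholder identifiers; substitute resources from your own account.
ROLE_ARN = "arn:aws:iam::111122223333:role/LeastPrivilegeSageMakerRole"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

sagemaker.create_notebook_instance(
    NotebookInstanceName="hardened-notebook",
    InstanceType="ml.t3.medium",
    RoleArn=ROLE_ARN,                 # scoped execution role instead of admin privileges
    KmsKeyId=KMS_KEY_ARN,             # encrypt the attached volume with a self-managed key
    RootAccess="Disabled",            # overrides the insecure default noted above
    DirectInternetAccess="Disabled",  # route traffic through the VPC rather than directly
    SubnetId="subnet-0example",       # keep the instance in a controlled VPC subnet
    SecurityGroupIds=["sg-0example"],
    InstanceMetadataServiceConfiguration={
        "MinimumInstanceMetadataServiceVersion": "2"  # require IMDSv2 session tokens
    },
)
```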