Do we need an SCA equivalent for GenAI code?

Mitigating the Security Risks of AI-Generated Code

As the adoption of generative AI accelerates, software developers are increasingly leveraging AI-generated code to streamline their workflows. However, this emerging practice introduces unique security considerations that organisations must address proactively. Existing application security tools are ill-equipped to handle the complexities of AI-generated code, and specialised solutions will be needed, much as software composition analysis (SCA) tools emerged in the early days of open-source adoption.

Lessons from the Open-Source Era

The rise of AI-generated code bears a striking resemblance to the emergence of open-source software decades ago. Initially, enterprises were hesitant to embrace open-source code due to liability concerns. However, as open-source proliferated, it became an integral part of modern software, constituting 70-90% of its composition by lines of code (though the portion that is actually reachable is often closer to 10%). Similarly, while some organisations are currently avoiding AI-generated code, this stance may become untenable as the technology gains widespread adoption and competitors leverage its advantages.

Just as SCA tools were developed to monitor and secure open-source components, a new market dedicated to securing AI-generated code is likely to emerge. This specialised tooling will be crucial in addressing the unique vulnerabilities introduced by AI-generated code: the models producing it are often trained on less secure open-source repositories, leading to a "bad code in, bad code out" scenario.

The Path Forward

As AI-generated code becomes more prevalent, organisations must recognise and address its challenges. The application security industry must take proactive steps to develop solutions that can monitor and secure AI code within modern software. This may include the introduction of AI Bills of Materials (AI BOMs) to track the provenance and potential vulnerabilities of AI-generated components.
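To make the AI BOM idea concrete, here is a minimal sketch of what an entry might record, loosely modeled on the CycloneDX SBOM format. The property names (`provenance`, `model`, `prompt-hash`, `human-reviewed`) are illustrative assumptions, not a published standard; the file path and model name are hypothetical.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "file",
      "name": "src/payments/retry.py",
      "properties": [
        { "name": "provenance", "value": "ai-generated" },
        { "name": "model", "value": "example-codegen-model-v1" },
        { "name": "prompt-hash", "value": "sha256:..." },
        { "name": "human-reviewed", "value": "true" }
      ]
    }
  ]
}
```

An entry like this would let security teams answer the same questions SCA answers for open-source components: which parts of the codebase came from a given model, and whether they were reviewed before shipping.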

The open-source evolution provides a blueprint for software providers to adopt new technologies and practices that ensure the secure and risk-mitigated adoption of AI-generated code. By learning from the past and proactively addressing emerging security challenges, organisations can harness the power of AI while mitigating its associated risks.
