Boardroom Blueprint: AI's Cybersecurity Risks
Are Boardrooms Prepared For AI?
In this fourth thought leadership piece of the series, following earlier installments in which I addressed AI-powered malware at the 2019 RSA Conference, explored the risks of AI hacking our brains in 2020 and delved into strategies for a "good offense" in cyber defense in a recent Forbes post, my aim is to steer the focus into the boardroom, outlining a structured approach to managing the cybersecurity risks that AI introduces.
Framing The Risk
The National Institute of Standards and Technology (NIST) recently unveiled the "AI Risk Management Framework" (AI RMF 1.0), a seminal document that delineates a structured approach to AI risk management, focusing on understanding and addressing risks, impacts and harms. The framework, expected to be formally reviewed no later than 2028, encourages responsible AI practices.
At the heart of this framework are four pivotal functions: Govern, Map, Measure and Manage, each further divided into categories and subcategories to address AI risks in practice. These functions serve as a blueprint for organizations to navigate the intricate landscape of AI risks, fostering trustworthy and responsible AI development and use.
Adopting A Continuous Risk Management Approach
In the face of the burgeoning AI landscape, adopting a continuous risk management approach is not just prudent but essential. The ISACA report titled "The Promise and Peril of the AI Revolution: Managing Risk" outlines a three-step continuous risk management approach to foster a secure and robust AI ecosystem:
Identify Risk: Leveraging frameworks such as the AI RMF from NIST can be instrumental in this phase, providing structured and flexible guidelines for managing risks in AI systems. The identification process should also involve a thorough review of the AI landscape to pinpoint emerging threats and vulnerabilities.
Define Risk Appetite: Establish an AI exploration sub-committee responsible for evaluating and prioritizing each risk based on its potential impact and the likelihood of its occurrence. This committee should work closely with different departments to understand the specific risks associated with their operations and to develop a risk appetite statement that clearly defines the level of risk the organization is willing to accept.
Monitor and Manage Risk: Form an interdisciplinary oversight team to track identified risks on an ongoing basis and adjust mitigations as the AI threat landscape evolves.
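To make the three steps above concrete, here is a minimal, illustrative sketch in Python of a risk register scored against a board-defined risk appetite. The `Risk` class, the likelihood and impact figures, and the appetite threshold are all hypothetical placeholders for demonstration, not part of the ISACA or NIST guidance:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float  # 0.0-1.0, estimated probability of occurrence
    impact: float      # 0.0-1.0, estimated business impact if realized

    @property
    def score(self) -> float:
        # Simple likelihood x impact scoring; real programs often use
        # qualitative matrices or quantitative estimates instead.
        return self.likelihood * self.impact

def triage(register: list[Risk], appetite: float) -> list[Risk]:
    """Return risks exceeding the board-approved appetite, worst first."""
    flagged = [r for r in register if r.score > appetite]
    return sorted(flagged, key=lambda r: r.score, reverse=True)

# Hypothetical register entries for illustration only.
register = [
    Risk("Model hallucination in customer-facing chatbot", 0.6, 0.7),
    Risk("Training-data poisoning", 0.2, 0.9),
    Risk("Prompt injection via third-party plugins", 0.5, 0.5),
]

for risk in triage(register, appetite=0.2):
    # Flags hallucination (0.42) and prompt injection (0.25).
    print(f"{risk.name}: score={risk.score:.2f}")
```

In practice, the scoring model and the appetite value would come out of the sub-committee's risk appetite statement, and the register would be reviewed by the oversight team on a recurring cadence.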
The ISACA report goes on to outline eight protocols and practices for building an AI security program that can provide transparency for boards: 1) Trust but Verify, 2) Design Acceptable Use Policies, 3) Designate an AI Lead, 4) Perform a Cost Analysis, 5) Adapt and Create Cybersecurity Programs, 6) Mandate Audits and Traceability, 7) Develop a Set of AI Ethics and 8) Societal Adaptation.
Technical Takeaways: Addressing Specific AI Risks
As we delve deeper into the technicalities, it is crucial to address specific risks that generative AI brings to the fore:
Data Integrity and Hallucinations: Generative AI systems are prone to generating hallucinations—plausible-sounding outputs that are incorrect or outright nonsensical. Ensuring data integrity is vital to prevent such outcomes.
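As an illustration of what a basic integrity control might look like, the following sketch flags generated claims whose content words do not sufficiently overlap any trusted source document. The token-overlap heuristic and the 0.5 threshold are assumptions chosen for demonstration; production systems rely on far more sophisticated grounding and fact-checking pipelines:

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounded(claim: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Treat a generated claim as grounded if enough of its words
    appear in at least one trusted source document."""
    claim_tokens = tokenize(claim)
    if not claim_tokens:
        return False
    for source in sources:
        overlap = len(claim_tokens & tokenize(source)) / len(claim_tokens)
        if overlap >= threshold:
            return True
    return False

sources = [
    "The NIST AI Risk Management Framework defines four functions: "
    "govern, map, measure and manage."
]
print(grounded("NIST defines four functions: govern, map, measure, manage",
               sources))  # True: claim is covered by the source
print(grounded("The framework mandates quarterly penetration tests",
               sources))  # False: claim is unsupported, flag for review
```

Flagged outputs would then be routed to human review rather than released, keeping hallucinations from silently entering business processes.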
Cybersecurity and Resiliency Impact: The integration of AI into business processes necessitates robust cybersecurity measures. Boards should advocate for special consideration to be given to business continuity in AI plans and strategies.
Ethical Considerations: The deployment of AI should be guided by a strong ethical framework, fostering the development of AI ethics.
Practical Takeaways: Action Items For Boards
To steer organizations safely through the AI revolution, boards should consider the following action items:
Develop AI Security Protocols: Implement appropriate AI security protocols to safeguard against potential threats. This involves establishing a comprehensive security framework that encompasses not only the physical and cybersecurity measures but also the procedural and personnel aspects. Boards should ensure that there is a continuous update mechanism in place to address the evolving threat landscape.
Establish AI Ethics: Develop a set of AI ethics to guide the responsible development and deployment of AI technologies. The board should foster a culture of ethical AI use across the organization.
Mandate Audits and Traceability: Ensure the traceability of AI systems through regular audits to maintain transparency.
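One way to make an audit trail tamper-evident is hash chaining, where each record embeds a hash of the one before it, sketched below. The `AuditLog` class and its example entries are illustrative assumptions, not a prescribed standard or a specific product:

```python
import hashlib
import json

class AuditLog:
    """Append-only audit trail where each entry carries a hash of the
    previous entry, so any tampering with history is detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, actor: str, action: str, detail: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"actor": actor, "action": action,
                 "detail": detail, "prev": prev_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute every hash and chain link; False means tampering."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev_hash:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = AuditLog()
log.record("model-v2.1", "inference", "loan application #1042 declined")
log.record("auditor", "review", "decision traced to documented criteria")
print(log.verify())  # True: chain is intact
log.entries[0]["detail"] = "loan application #1042 approved"
print(log.verify())  # False: tampering detected
```

An auditor can rerun `verify()` at any time, which is the kind of independently checkable transparency boards should be asking for.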
Drawing from the wisdom of the Chinese philosopher Laozi, "为学日益，为道日损," which translates roughly to "To attain knowledge, add things every day. To attain wisdom and enlightenment, remove things every day," we are reminded that learning and adapting in the AI landscape is a continuous process. Strategies and approaches must be constantly refined to stay ahead of potential risks: not blindly adding as much data as possible to the AI model, but consciously reviewing and auditing the integrity and correctness of its output, and deliberately removing inconsistencies and errors from the data so that the system evolves to a higher order.
As we forge ahead, the boardroom must not remain a spectator but be a proactive player, steering clear of a reactive approach and embracing a strategy that is rooted in foresight, preparedness and agility, ensuring a secure and robust AI future.
Original Source: Forbes Technology Council Post; Oct 2023
Professor Jason Lau sits on the global ISACA Board of Directors, serves as Chief Information Security Officer at Crypto.com, and is a member of the Forbes Technology Council and a contributor to the World Economic Forum.
With over 23 years of global experience in cybersecurity and data privacy, Jason strives to demystify the complexities of cybersecurity and to explore its intersection with artificial intelligence.
Subscribe to JasonCISO Newsletter to follow emerging industry updates.