Navigating AI: Legal, cybersecurity and ethical considerations for boards and leadership

AI is quickly becoming part of business as usual, and regulators are paying serious attention to its possibilities and ramifications. This edition of the Diligent Minute, written by Phil Lim, Director of Product Management & Global AI Champion at Diligent, explores the steps boards need to take to ensure AI is used responsibly.

Phil Lim, Director of Product Management & Global AI Champion at Diligent

AI is revolutionizing sectors across the economy, and boards of directors must oversee the ethical and legal use of AI technologies. The board's responsibility extends beyond safeguarding the organization's assets; directors must also maintain its reputation for compliance and ethical integrity — while still encouraging the safe use of AI to increase productivity and create value.

5 steps for boards to create a safe, ethical environment for AI usage

no. 1: Invest in AI ethics training and education

Boards recognize that they need specialized knowledge of AI rules, regulations and compliance obligations to successfully oversee their organization's AI usage. As such, they should assess their own collective expertise on the matter, upskill with AI ethics courses and stay up to date on the latest developments. By building their own AI knowledge, directors will be equipped to make informed decisions that align with legal standards and stakeholder expectations.

no. 2: Bring in new perspectives  

Given the complexity and importance of AI ethics, many boards are looking at bringing on external experts to better oversee AI strategies. Be open to new insights and perspectives — both internal and external.  While your chief technology officers (CTOs) and chief information security officers (CISOs) have valuable knowledge and familiarity with the organization's systems, which is key to understanding the risks and opportunities associated with AI, they may have blind spots. External consultants can provide an independent and objective perspective as well as best practices to help the board stay on top of responsible AI use.  

no. 3: Watch out for “AI-washing”

The recent boom in AI has led to instances of AI-washing — companies overstating or using vague or misleading language around their AI capabilities. AI-washing may be intentional or unintentional, as companies face pressure to attract investment or to quickly adopt AI.   

When companies falsely portray themselves as using advanced AI technologies without actually implementing them effectively, they may violate regulations that require transparency and accuracy in reporting technological capabilities. Not to mention, they risk damaging trust with customers and investors. 

That’s why it’s important for boards to fully understand how their company is engaging with AI, as well as the various risks and opportunities. In addition to enhancing their knowledge through education and certifications, directors must incorporate AI into their organization’s bigger risk management picture. And by developing and maintaining policies around AI, boards can ensure adherence to regulations and maintain credibility, ultimately safeguarding their organization’s reputation and trustworthiness.

no. 4: Address AI vulnerabilities in cybersecurity

Boards should consider AI risks in the context of cybersecurity, IT security and overall enterprise risk management. Given that 36% of board directors identified generative AI as the most challenging issue to oversee, it is crucial for boards to invest in specialized training and education to understand AI’s associated risks. Bringing in external expertise and holding regular meetings with CISOs can help uncover risks and vulnerabilities that internal teams might miss. Additionally, boards should establish dedicated committees focused on AI and cybersecurity, and their organizations should use advanced tools for monitoring risk holistically. Regular audits and evaluations are essential to ensuring AI doesn’t introduce unforeseen risks.

no. 5: Apply a principle-based approach to overseeing AI 

Boards and leaders should acknowledge that people’s feelings about AI vary widely; even the most dedicated AI evangelists may feel some fear of the technology. With generative AI, it can be tempting to quickly create an “acceptable AI use policy,” paste it into your policy management system, check the box and call it done.

But this inevitably leads to the policy going unread or misunderstood by the general employee population. It’s far more effective to derive a core set of critical principles that lay the foundation for meaningful AI policies.  

For example, at Diligent, we have come up with these principles that guide all use of AI across all Diligent apps: 

Never put customer or organizational confidential data in any unapproved AI tool

Unauthorized AI services — including free tools such as ChatGPT — may regurgitate confidential data entered into them; users should only use approved, protected AI tools that have been evaluated for security risks.

Humans are responsible and accountable for the outputs of AI 

AI responses can be inaccurate or biased, so it is essential to thoroughly review and validate the information before relying on it for any decisions or actions. 

Don't use AI for unethical or high-risk purposes

AI should not be employed in ways that could cause harm (e.g., generating spam) or in sensitive areas where the risks are too great (e.g., making personnel-related decisions). 

 

Consider joining me at Diligent Elevate, our user conference, to hear more about the impact of artificial intelligence on the GRC landscape.

 
