Will a single AI Governance Regulation emerge?
Image source: Midjourney, prompted to depict legislators debating. A glaring illustration of the gender biases in existing models.


Note: This was originally written as a short essay for my Master's module in AI, Law and Ethics, and is reproduced here with slight tweaks and with my facilitators' permission. The key message is that, given the differences in political views on regulating AI, non-state actors need to take a proactive role in shaping the principles behind AI Ethics & Governance that can be broadly followed.

Artificial Intelligence (AI) is rapidly changing the way we live and work. Beyond being viewed simply as technology, it has become deeply entrenched in our social lives, shaping the information we consume, influencing the allocation of public resources, and driving innovation in medical research, amongst many other use cases.

Given its profound impact on both individuals and society at large, open dialogue around the ethical considerations is important to ensure that such technology is developed and used responsibly, and does not further widen existing social divides.

Over the past few years, there have been attempts by various entities to create guidance around the responsible use of AI, such as the IEEE Ethically Aligned Design Framework (IEEE, n.d.), Singapore's Model AI Governance Framework, the proposed EU AI Act (European Union, 2021), the United States' AI Bill of Rights and NIST AI Risk Management Framework, the UK's pro-innovation approach to regulating AI, and China's Beijing Artificial Intelligence Principles.

In 2019, researchers Jobin et al. conducted a global survey of existing AI ethics guidelines. Of the 84 documents reviewed, no single ethical principle appeared consistently across all of them, though there were common themes such as "transparency, justice and fairness, non-maleficence, responsibility, and privacy". Since this early landscape study, such guidelines have proliferated further, with the OECD AI Policy Observatory database currently listing nearly 800 AI policy initiatives from around 69 countries.

However, most guidelines today are voluntary and non-binding, with the notable exceptions being the proposed EU AI Act and China's regulations on recommender systems and deepfakes.

As such, this article takes the position that while incorporating these different guidelines into practice is aspirational (the keywords being "incorporate" and "practice"), it is unlikely to happen soon. We begin by arguing why this aspirational goal is not yet achievable, highlighting the challenges and drawbacks. We then conclude with some of the benefits that suggest why the global community should still strive for continued alignment towards ethical frameworks that are principles-based and open to adaptation for cultural nuances.

CHALLENGES AND DRAWBACKS

Challenges Incorporating Differences

The possibility of incorporating ethical frameworks from different research institutes, government organizations, and industries is a complex issue. While there is a need for standardization and consistency in ethical frameworks, there are also challenges in achieving consensus among different entities. This is due to differences in values and perspectives, as well as differences in the practical implications of ethical frameworks.

The MIT Media Lab conducted the Moral Machine Experiment from January 2016 to July 2020 to study whether cultural differences influence ethical decisions across various permutations of the classical Trolley Problem. The study concluded that attitudes differ across three broad cultural groups (Western, Southern, Eastern) towards factors such as gender, whether potential victims are law-abiding, and the value of human lives versus pets.

As noted earlier, most of today's guidelines lack enforceability, and one of the biggest challenges and drawbacks is the absence of a single attitude towards ethical considerations that can be regarded as universally correct.

As the Moral Machine Experiment shows, such cultural differences cannot be easily reconciled, and no major nation would accept a unilateral decision that another's way of governing is better than its own. With anti-colonial sentiments on the rise, and with the EU AI Act forbidding Chinese-style social credit scoring mechanisms, a global consensus is unlikely in the near future as long as neither side is willing to consider the context behind the other's positions.

Even within Europe, the United Kingdom's approach to regulating AI emphasizes "clear, innovation-friendly and flexible approaches to regulating AI", with some researchers suggesting that the National AI Strategy is a signalling document hinting at broader criticism of the EU AI Act's rigid and impractical approach to regulation, which increases business costs without necessarily improving the overall risk posture.

Challenges Putting Principles Into Practice

On the topic of putting such ethical frameworks into practice, the challenge lies in the difficulty of measuring real conformance and intent. For example, in relation to AI ethics, Singapore's Model AI Governance Framework states that it "does not focus on these specific issues, which are often sufficient in scope to warrant separate study and treatment". The complexity and interconnectedness of such principles make them genuinely difficult to translate into practices that can be effectively measured.

As another example, the Monetary Authority of Singapore released the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector in 2018. The section on Ethics covers two principles, broadly described as requiring algorithms to be "aligned with the firm's ethical standards, values and codes of conduct", and for algorithmic decisions to be held to "at least the same ethical standards as human-driven decisions". It does not address which practices are ethical and expected, and so avoids the difficult questions.

Researchers have also highlighted that, despite best efforts, organizations trying to incorporate ethical practices may still be undermined by risks of unethical behaviour such as "(1) ethics shopping; (2) ethics bluewashing; (3) ethics lobbying; (4) ethics dumping; and (5) ethics shirking". A highly debated case in this category is the dismissal of Google AI researcher Timnit Gebru in 2020 after she attempted to publish a paper relating to the environmental impact of Big Tech firms' training of large language models.

Big Tech firms' positions on the responsible use of technology have often been challenged over whether these firms are prepared to make ethical choices at the expense of profits and growth. For example, it has been widely reported that while Facebook knew its user-engagement algorithms were amplifying hate speech and incitement to violence, these concerns were not acted upon because the algorithms drove up profits.

Another potential drawback is that incorporating ethical frameworks from different groups can lead to a fragmented and inconsistent approach to ethical considerations in AI. With so many different perspectives and interests involved, it can be difficult to create a cohesive and unified framework for ethical considerations. This can lead to confusion and ambiguity and may undermine the effectiveness of ethical guidelines.

BENEFITS OF STRIVING FOR A PRINCIPLES-BASED APPROACH

Despite these challenges, there are numerous benefits to objectively considering ethical frameworks from different entities, as they represent diverse perspectives grounded in particular historical and cultural contexts. Sharing these perspectives can lead to better ethical decision-making in the development and deployment of AI.

Another benefit of incorporating ethical frameworks from different groups is the promotion of a more inclusive and diverse perspective on AI ethics. Research institutes, for example, often have a strong academic focus and bring a wealth of knowledge and expertise to the table. Government organizations, on the other hand, are concerned with public safety and the broader social impact of AI. Industries, meanwhile, are motivated by commercial interests and the potential profits that can be derived from AI applications. By incorporating the perspectives of all these groups, we can create a more comprehensive and nuanced understanding of the ethical implications of AI.

Instead of striving for a global, one-size-fits-all approach with specific instructions, a principles-based approach can be developed that sets out the fundamental issues any ethical framework or regulation needs to consider. These can include fairness and non-discrimination, privacy, rights to transparency, circumstances requiring explainability, human involvement in decision-making, equitable alternative channels for underprivileged or unwilling participants, safeguards against violence and threats to social cohesion, and limiting the spread of mis- and disinformation.

With the general principles established, different nations can then develop specific requirements that take their own context and needs into consideration. The hope is that earlier open dialogue and mutual understanding will allow nations to discuss their contextual needs openly and avoid politicizing ethical considerations.

CONCLUSION

In conclusion, incorporating ethical frameworks from different entities is a complex endeavour that requires collaboration and a principles-based approach rather than a unilateral, global rules-based one. The benefits are nonetheless significant: improved transparency and accountability, better alignment between entities on ethical principles, and enhanced public trust in AI.

To help facilitate this, non-state actors with global representation (e.g. ISO, IEEE, GPAI, professional bodies) can play an important role in enabling constructive dialogue.

In addition, it is recommended that future efforts be made to increase awareness and education about the importance of the ethical use of technology at various levels within the international and national context. This could include training programs for researchers and other stakeholders, as well as public outreach and engagement efforts to increase understanding of the benefits and challenges of embedding algorithms into our daily lives.
