How do we operationalize AI ethics?

AI is about optimizing processes, not eliminating humans from them. The question of accountability remains vital to the grand idea that AI can substitute for humans. Yes, technology and automated systems helped us achieve better economic output in the past century, but can they substitute for services, creativity, and deep knowledge? I still believe they cannot, but they can optimize the time spent on developing these areas.


Accountability relies heavily on intellectual property rights, on foreseeing the technology's impact on collective and individual rights, and on the safety and protection of the data used in training and shared while developing new models.

As technology continues to advance, the topic of AI ethics has become increasingly relevant. It raises important questions about how we regulate and integrate AI into society while minimizing potential risks.

I work closely with one aspect of AI: voice cloning. Voice is an important part of an individual's likeness, and it is biometric data used to train voice models. How to protect that likeness (legal and policy questions), how to secure voice data (privacy policies and cybersecurity), and where the limits of applying voice cloning lie (ethical questions that measure the impact) are essential considerations while building the product.

We must consider how AI aligns with society's norms and values. AI must be adapted to fit into society's existing ethical framework, and we must ensure that it does not impose additional risks or threaten established norms.

The question of the technology's impact covers the areas where AI empowers one cluster of individuals and eliminates others. It is the existential dilemma we face at every step of our development and societal growth or decline. Can AI feed more disinformation into information ecosystems? Yes. How do we manage that risk at the product level, and how do we educate users and policymakers about it? The answers lie not in the dangers of the technology itself but in the way we package it into products and services. And as long as we don't have enough people on product teams who look beyond and assess the technology's impact, we will be dragged into a cycle of fixing the mess.

The integration of AI into products raises questions about product safety and how we can prevent AI-related harm. The development and implementation of AI should prioritize safety and ethical considerations, and this requires allocating resources to the relevant teams.

To facilitate the emerging discussion on how to operationalize AI ethics, I suggest this basic cycle for making AI ethical at the product level:

1. Track and dive into the legal aspects of AI and how we regulate it, where such regulation exists. These include the EU's AI Act, the Digital Services Act, the UK's Online Safety Bill, and the GDPR on data privacy. The frameworks are a work in progress and need input from industry frontrunners (emerging tech) and industry leaders. See point (5), which closes the cycle I suggest.


2. Think about how we adapt AI-based products to society's norms without imposing additional risks. Does the product affect information security or the job sector? Does it step into the copyright and IP zone? Create a crisis scenario-based matrix (see the first sketch after this list); I take this practice from my background in international security.


3. Think about how we integrate the two points above into AI-based products. As AI becomes more sophisticated, we must ensure that it aligns with society's values and norms, and be proactive in addressing ethical considerations and integrating them into AI development and implementation. If there is a threat of spreading more disinformation through AI-based products such as generative AI, there is a need to introduce mitigation features, moderation, and limits on access to the core technology, and to communicate with users (see the second sketch after this list). It is essential to have AI ethics and safety teams behind AI-based products. This requires resources and a company vision.


4. Think about how we communicate about AI and the products based on it. Effective communication is critical in shaping the public discourse around AI. We have seen more than a decade of challenges that technologies pose to societal and individual security, and the response has mostly been retroactive communication led by the tech teams. Quite often, more communication from products to their users is seen as a risk of losing customers. In emerging technologies, proactive communication on the policies and ethics of using AI-based products is essential. The same applies to proactive thought leadership and the sharing of best practices among emerging-tech and rapidly growing companies, because that is where the understanding of the technology's real caveats and the expertise reside. We need to ensure that all stakeholders are engaged in this conversation and that accurate information is available to policymakers, the media, and the general public.


5. Think about how we contribute to legal frameworks and shape them. Best practices and policy frameworks are not empty buzzwords but practical tools for making new technology work as an assistive tool, not a looming threat. Having policymakers, researchers, big tech, and emerging tech in one room is essential to balancing societal and business interests around AI. Legal frameworks must adapt to the emerging technology of AI, and we need to ensure that they protect individuals and society while also facilitating innovation and progress.
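
To make point (2) concrete, here is a minimal sketch of what a crisis scenario-based matrix can look like when a product team keeps it in code rather than a slide deck. The scenarios, scores, and mitigations below are illustrative assumptions for a voice-cloning product, not an actual risk register:

```python
# A minimal sketch of a crisis scenario-based risk matrix for an AI product.
# Scenario names, scores, and mitigations are illustrative assumptions,
# not findings from any specific product review.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Classic risk score: likelihood x impact.
        return self.likelihood * self.impact

scenarios = [
    Scenario("Cloned voice used for fraud calls", 4, 5,
             ["consent verification", "watermarking", "abuse reporting"]),
    Scenario("Training data leak exposes biometric voiceprints", 2, 5,
             ["encryption at rest", "access controls", "retention limits"]),
    Scenario("Generated audio spreads political disinformation", 3, 4,
             ["content moderation", "provenance metadata"]),
]

# Review the highest-priority scenarios first.
for s in sorted(scenarios, key=lambda s: s.priority, reverse=True):
    print(f"{s.priority:>2}  {s.name} -> {', '.join(s.mitigations)}")
```

The value here is not the numbers themselves but the discipline: the team must enumerate scenarios and attach at least one mitigation to each before launch.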
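
And to make point (3) concrete, here is a minimal sketch of a mitigation gate in front of a generative voice model. The names (check_consent, moderate_text, synthesize) are hypothetical stand-ins for whatever consent registry, moderation classifier, and synthesis engine a product actually uses:

```python
# A minimal sketch of a mitigation gate in front of a generative voice API.
# check_consent, moderate_text, and synthesize are hypothetical stand-ins,
# not the API of any real voice-cloning product.

BLOCKLIST = {"wire transfer", "one-time password"}  # toy fraud indicators

def check_consent(voice_id: str) -> bool:
    # Assumption: a registry records verified consent per cloned voice.
    consent_registry = {"voice-123": True}
    return consent_registry.get(voice_id, False)

def moderate_text(text: str) -> bool:
    # Toy moderation check; a real product would call a trained classifier.
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

def generate_speech(voice_id: str, text: str) -> bytes:
    if not check_consent(voice_id):
        raise PermissionError("No verified consent for this voice.")
    if not moderate_text(text):
        raise ValueError("Request blocked by content moderation.")
    return synthesize(voice_id, text)

def synthesize(voice_id: str, text: str) -> bytes:
    # Placeholder for the core model; access stays behind the gate above.
    return b"<audio bytes>"
```

The design choice is that access to the core model only goes through the gate, so consent and moderation checks cannot be bypassed by calling the model directly.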


This is a basic cycle for integrating AI-based emerging technologies into our societies. As we continue to grapple with the complexities of AI ethics, it is essential to remain committed to solutions that prioritize safety, ethics, and societal well-being. These are not empty words but the tough daily work of putting all the puzzle pieces together.

These words are based on my own experience and conclusions.
