The Dark Side of AI — How Can The Creators Help?!
Responsible AI Development: Approach, Considerations, and Frameworks for AI Leaders and AI Product Teams.
Not a single day goes by without us learning about something astonishing that an AI tool has done. Yes, we are in uncharted territory. The AI revolution is moving forward at a blistering pace, and so are the concerns and fears associated with it. The truth is that many of those fears are real!
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” — Ray Kurzweil
However, that does not mean we should be hesitant about developing AI. The overall effect is largely positive, whether in healthcare, autonomous driving, or any other application. Hence, with the right set of safeguards, we should be able to push the limits ethically and responsibly.
Here are a few considerations and frameworks that will aid in responsible AI development — for those who want to be part of the solution.
Agree upon the Principles
One of the first and most vital steps in addressing these dilemmas at an organizational level is to define your principles clearly. Once your principles are defined, decision-making becomes easier, and decisions that conflict with your organizational values become less likely. Google has created its ‘Artificial Intelligence Principles’. Microsoft has created its ‘Responsible AI principles’.
The OECD (Organisation for Economic Co-operation and Development) has created the OECD AI Principles, which promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. More than 90 countries have adopted these principles to date.
In 2022, the United Nations System Chief Executives Board for Coordination endorsed the Principles for the Ethical Use of Artificial Intelligence in the United Nations System.
The consulting firm PwC has consolidated more than 90 sets of ethical principles, containing over 200 principles in total, into nine core principles. Check out their responsible AI toolkit here.
Build in Diversity to Address Bias
1. Diversity in the AI workforce: To address bias effectively, organizations must ensure inclusion and participation in every facet of their AI portfolio: research, development, deployment, and maintenance. That is easier said than done. According to a 2021 AI Index survey, the two main factors holding back underrepresented groups are a lack of role models and a lack of community.
2. Diversity within the data sets: Ensure diverse representation in the data sets on which the algorithm is trained. Data sets that truly reflect the diversity of the population are hard to come by, which makes even a simple representation audit a worthwhile first step (see the sketch below).
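Here is a minimal sketch of such a representation audit, assuming a pandas DataFrame with a hypothetical demographic column named `group`. A real audit should compare against domain-appropriate baselines (such as census proportions) rather than the naive equal split used here.

```python
# Minimal sketch of a dataset-representation audit (illustrative only).
import pandas as pd

def representation_report(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Compare each subgroup's share of the data against an equal share."""
    counts = df[column].value_counts()
    shares = counts / counts.sum()
    equal_share = 1.0 / len(counts)  # naive baseline; swap in real population data
    return pd.DataFrame({
        "count": counts,
        "share": shares.round(3),
        "gap_vs_equal_share": (shares - equal_share).round(3),
    }).sort_values("share")

# Toy data: group C is clearly underrepresented.
df = pd.DataFrame({"group": ["A"] * 70 + ["B"] * 25 + ["C"] * 5})
print(representation_report(df, "group"))
```

Even a report this simple surfaces gaps early, before they harden into biased model behavior downstream.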
Build in Privacy
How do we ensure that personally identifiable data stays safe? Preventing data collection altogether is rarely realistic, so organizations must build privacy into how data is collected, stored, and used.
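One common building block is pseudonymization: replacing raw identifiers with non-reversible tokens before anything is stored. Below is a minimal sketch using Python's standard library; the key handling is illustrative only, and a real system would pull the key from a secrets manager rather than hard-coding it.

```python
# Minimal sketch of pseudonymizing PII fields before storage.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"  # illustrative assumption

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "email": "jane@example.com", "age": 34}
safe_record = {
    "name_token": pseudonymize(record["name"]),
    "email_token": pseudonymize(record["email"]),
    "age": record["age"],  # non-identifying fields can pass through
}
print(safe_record)
```

Using a keyed HMAC rather than a bare hash means the tokens cannot be reversed by simply hashing guessed identifiers, as long as the key itself stays protected.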
Build in Safety
How do you make sure that the AI works as expected and does not end up doing anything unintended? And what if someone hacks or misleads the AI system into conducting illegal acts?
DeepMind has made one of the most notable moves in this direction. It has laid out a three-pronged approach to keep AI systems working as intended and to mitigate adverse outcomes as far as possible. According to DeepMind, technical AI safety rests on three pillars: specification, robustness, and assurance.
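As one toy illustration of the robustness pillar (my own sketch, not DeepMind's methodology), the snippet below checks whether a model's predictions stay stable when its inputs are slightly perturbed. Inputs where small noise flips the prediction are candidates for unintended behavior and deserve closer review.

```python
# Minimal robustness probe: do predictions survive small input perturbations?
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
noise = rng.normal(scale=0.05, size=X.shape)  # small, arbitrary perturbation

baseline = model.predict(X)
perturbed = model.predict(X + noise)
stability = (baseline == perturbed).mean()
print(f"Prediction stability under noise: {stability:.1%}")
# Low stability flags regions where the model may behave in unintended ways.
```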
Build in Accountability
Accountability is one of the hardest aspects of AI to tackle, because of its socio-technical nature. The following are the major pieces of the puzzle, according to Stephen Sanford, Claudio Novelli, Mariarosaria Taddeo, and Luciano Floridi.
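Whatever the full set of pieces looks like in your organization, one practical building block is traceability: a record of which model produced which decision, on which inputs, and whether a human reviewed it. Here is a minimal sketch of such an audit trail; the field names and example values are illustrative assumptions.

```python
# Minimal sketch of an audit trail for automated decisions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="decisions.log", level=logging.INFO)

def log_decision(model_version, inputs, output, reviewer=None):
    """Record what produced a decision, on which inputs, and who reviewed it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # None for fully automated decisions
    }
    logging.info(json.dumps(entry))

log_decision("credit-model-v1.2", {"income": 52000, "tenure": 3}, "approved")
```

A log like this does not make anyone accountable by itself, but without it, assigning responsibility after the fact is close to impossible.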
Build in Transparency and Explainability
Explainability in AI (XAI) is an important field in its own right and has gained a lot of attention in recent years. In simple terms, it is the ability to bring transparency to the reasons and factors that led an AI algorithm to a specific conclusion. The GDPR already includes a ‘Right to an Explanation’ in Recital 71, meaning that data subjects can ask a company to explain how an algorithm made an automated decision about them. This becomes tricky as we try to implement AI in industries and processes that require a high degree of trust, such as law enforcement and healthcare.
The problem is that the higher the accuracy and non-linearity of the model, the more difficult it is to explain.
Simpler models, such as rule-based classifiers, linear regression, decision trees, KNN, and Bayesian models, are mostly white box and hence directly explainable. Complex models are mostly black boxes.
It must be noted that NIST differentiates between explainability, interpretability, and transparency. For the sake of simplicity, I have used the terms interchangeably under explainability.
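The contrast is easy to demonstrate with scikit-learn: a decision tree can print its own rules, while a black-box model needs a post-hoc technique such as permutation importance. A minimal sketch, using the standard iris dataset:

```python
# White-box vs. black-box explainability, side by side.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# White box: the learned rules themselves are the explanation.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Black box: explain post hoc by measuring how much each feature matters.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Note that the permutation scores explain global feature influence, not any single decision; per-decision explanations for black boxes need heavier tools, which is exactly why the high-accuracy, low-explainability trade-off bites in high-trust domains.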
When it comes to healthcare, the CHAI (Coalition for Health AI) has come up with its ‘Blueprint for Trustworthy AI’, a comprehensive approach to ensuring transparency in health AI. It is well worth a read for anyone building AI systems for healthcare.
Build in Risk Assessment and Mitigation
Organizations must ensure an end-to-end risk management strategy to prevent ethical pitfalls when implementing AI solutions. Multiple isolated frameworks are in use. The NIST AI Risk Management Framework (AI RMF), from the National Institute of Standards and Technology, was developed in collaboration with private- and public-sector organizations working in the AI space. It is intended for voluntary use and is expected to boost the trustworthiness of AI solutions.
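As a concrete starting point, a lightweight risk register can be organized around the AI RMF's four core functions: Govern, Map, Measure, and Manage. The fields and the example entry below are illustrative assumptions of mine, not part of the framework itself.

```python
# Minimal sketch of a risk register keyed to the NIST AI RMF core functions.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    rmf_function: str   # one of: Govern, Map, Measure, Manage
    likelihood: str     # e.g., low / medium / high
    impact: str
    mitigation: str
    owner: str
    status: str = "open"

register: list[RiskEntry] = [
    RiskEntry(
        description="Training data under-represents older patients",
        rmf_function="Map",
        likelihood="high",
        impact="medium",
        mitigation="Augment dataset; add subgroup performance tests",
        owner="data-team",
    ),
]

for entry in register:
    print(f"[{entry.rmf_function}] {entry.description} -> {entry.mitigation}")
```

Even a register this simple forces the team to name an owner and a mitigation for each risk, which is most of what "end-to-end" risk management means in practice.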
Long story short…
Technology will move forward whether you like it or not. Such was the case with industrialization, electricity, and computers, and such will be the case with AI as well. AI is progressing too quickly for the law to catch up, and so are the dangers that come with it. Hence, it is incumbent upon those who develop it to take a responsible approach in the best interest of society. On the other hand, being a tech doomer and striking down every bit of progress that startles you will hamper the advancement of human civilization, even if your concerns are well founded. What we must do is put the right frameworks in place so the technology can flourish in a safe and responsible manner.
“With great power, comes great responsibility.” — Spider-Man
Now you have a great starting point above. The question is whether you are willing to take responsibility now, or wait for something catastrophic to happen and let the resulting regulations force you to do so. You know what the right thing to do is. I rest my case!