EIOPA Event on Artificial Intelligence Governance
On 15 December, EIOPA held an interesting event on Artificial Intelligence Governance. For those of you who could not participate, I have summarized the most important aspects below.
Opening keynote by Petra Hielkema – Chairperson of EIOPA
Ms Hielkema wants to continue to cooperate closely with stakeholders and to use monitoring to understand AI in the insurance sector. AI has to be used in an ethical and trustworthy manner that brings benefits to the industry, consumers and society. Artificial Intelligence plays an essential role in the digital transformation of the insurance industry and brings many benefits, such as increased speed, efficiency and accuracy. However, AI also brings relevant challenges and risks for consumers.
The insurance sector and its supervisors already have extensive experience with data analytics processes in governance and risk management. However, AI has some unique characteristics that need to be incorporated into existing governance frameworks, especially for AI use cases with a potentially high impact on consumers and on the undertakings themselves.
EIOPA believes that ethical and trustworthy AI systems are achieved not by a single governance measure (e.g. explainability, human oversight, record keeping etc.), but rather by a combination of all of them, following a risk-based approach and adapting them to the concrete AI use case at hand. According to EIOPA's 2018 thematic review on the use of Big Data Analytics in motor and health insurance, one-third of insurance undertakings were already actively using AI and another third were at the proof-of-concept stage. Next year, EIOPA will issue a new market monitoring survey to update these numbers; she expects they have gone up given digitalisation and the pandemic.
EIOPA convened an Expert Group on digital ethics in insurance, which in 2021 developed an AI governance principles report providing guidance to the sector. In the years to come, EIOPA will continue this cooperation with stakeholders and provide further guidance. In addition, at the international level, through the IAIS and its Fintech Forum, which is chaired by EIOPA, it will continue to share practices and identify what regulatory steps are being taken. EIOPA will also seek to gain a better understanding of the implications of the trend towards increasingly data-driven business models from a financial inclusion perspective.
AI systems already operate under a number of legally binding rules at international, European and national level (including the GDPR, Solvency II and the Insurance Distribution Directive). Ms Hielkema stressed that existing legislation should form the basis of any AI governance framework, but the different pieces of legislation need to be applied in a systematic manner and require unpacking to help organisations understand what they mean in the context of AI. Furthermore, an ethical use of data and digital technologies implies a more extensive approach than merely complying with legal provisions: it needs to take into consideration the provision of public good to society as part of the corporate social responsibility of firms.
In addition, the Commission's legislative proposal for an AI Act will provide a cross-sectoral legal framework for the use of AI in the EU. EIOPA supports the Commission's risk-based approach in the AI Act. However, it also needs to be recognized that sectoral legislation is already addressing the risks of the use of AI and mandating supervisors to act when needed. Therefore, EIOPA does not support the inclusion of insurance AI use cases in the list of high-risk applications of the AI Act at this moment in time; further specification of the AI framework should be dealt with by sectoral legislation, building on the already existing sectoral governance, risk management, conduct of business and product oversight and governance requirements.
Insurance supervisors are already supervising AI risk and know very well how their sector operates. EIOPA therefore believes that national supervisors, together with EIOPA, should remain responsible for the development and implementation of any further regulation and supervision of the use of AI in the insurance and pensions sector.
Finally, the definition of AI should not cover mathematical models that have traditionally been used and regulated in the insurance sector, including internal models. In EIOPA’s view, the definition of AI should be narrower and focus on AI systems that have distinctive features, such as machine learning approaches.
Presentation on AI Explainability by Andreas Gillhuber – Co-CEO at Alexander Thamm GmbH and Johannes Nagele – Principal AI Researcher & Consultant at Alexander Thamm GmbH
Black-box machine learning solutions are replacing traditional statistical models. Overall, AI can be found throughout the full value chain of the industry, but mostly in claims management. In the 20th century, typical AI was rule-based; in the 21st century, black-box AI became more popular. Experts are no longer needed to hand-craft the rules, as the machines learn from data themselves. This type of AI increased speed, efficiency and accuracy, but is very complicated, which decreases transparency. Newer methods add explainable AI (XAI) to the system to analyse the black-box AI models, which increases transparency and interpretability: when one puts data into the black-box AI, the AI delivers an output, and the explainable AI makes the model understandable by analysing the black-box model.
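To make this concrete, here is a minimal, model-agnostic XAI sketch in Python: a black-box classifier is explained from the outside via permutation importance. The synthetic dataset, the feature names and the choice of model are my own illustrative assumptions, not something presented at the event.

```python
# A minimal, model-agnostic XAI sketch: explain a black-box claims model
# by shuffling each feature and measuring the drop in accuracy it causes.
# Data and feature names are invented for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for policy/claims features.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["driver_age", "vehicle_age", "annual_mileage",
                 "region_risk", "past_claims", "engine_power"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": a gradient-boosted ensemble.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance treats the model purely as input -> output.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>15}: {score:.3f}")
```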
Dr Nagele provided an example from car accidents: after an accident, one takes pictures of the damaged vehicle (this is the input data). A pixel-attribution method then highlights the image locations contributing to an explanation, tracing the decision back to single pixels by reversing the model's analysis of the image. This approach is easy to implement and easy to interpret. Unfortunately, it does not explain the general decision logic, and many such methods need detailed knowledge of the machine learning model.
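A hedged sketch of that pixel-attribution idea, assuming a gradient-based saliency map as one common way to "reverse" the model's analysis (the talk did not name a specific method). The tiny CNN and the random tensor stand in for a trained damage classifier and a real photo.

```python
# Gradient saliency sketch: back-propagate the class score to the input
# image so each pixel receives a relevance score. Model and image are
# placeholders, not the presenters' system.
import torch
import torch.nn as nn

# Placeholder "damage classifier"; in practice this would be trained.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in photo

score = model(image)[0, 1]  # score for the assumed "damaged" class
score.backward()            # reverse pass through the model

# Saliency: gradient magnitude per pixel, max over colour channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 64, 64)
print(saliency.shape, float(saliency.max()))
```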
Presentation on AI Fairness by Marcin Detyniecki – Group Chief Data Scientist and Head of Research and Development at AXA
AI is very complex. Responsible AI is a standard for ensuring that AI is safe, trustworthy and unbiased. Responsible AI ensures that AI and machine learning (ML) models are robust, explainable, ethical and efficient. Several organizations have published AI principles based on values and ethics. EU guidelines for trustworthy AI include transparency, diversity, accountability, human oversight, technical robustness and data governance.
On the other hand, AI can also include unwanted bias: an algorithm performing differently for sensitive subgroups. If not controlled for, this bias can be reproduced at scale without being noticed. The research community has proposed plenty of fairness metrics and bias mitigation methods. Nevertheless, open problems remain. Firstly, sensitive attributes are often missing in practice, since the GDPR prohibits the collection and processing of sensitive personal attributes in many cases. In addition, fairness by unawareness is insufficient, because large datasets contain many correlations that act as proxies. Finally, existing research proposals have limitations with continuous sensitive attributes (such as age) and with regression problems (such as insurance pricing).
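As a small illustration of the fairness metrics mentioned, here is a sketch of demographic parity difference: the gap in favourable-outcome rates between two groups defined by a sensitive attribute. The data is invented, and note the caveat from the talk: in practice the sensitive attribute is often not even available.

```python
# Demographic parity difference on synthetic decisions.
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)     # model decisions (1 = favourable)
sensitive = rng.integers(0, 2, size=1000)  # e.g. two age bands (often unavailable)

rate_a = y_pred[sensitive == 0].mean()
rate_b = y_pred[sensitive == 1].mean()
dp_diff = abs(rate_a - rate_b)  # 0 would mean equal favourable rates
print(f"group A rate={rate_a:.3f}, group B rate={rate_b:.3f}, "
      f"demographic parity diff={dp_diff:.3f}")
```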
All in all, there is no one-size-fits-all solution for AI fairness. The best solution depends on the context of the use case. Moreover, assessing and mitigating unwanted biases without the sensitive attribute is hard.
For now, a process-driven approach with human oversight (AI governance) is the best practice available. The insurance industry needs to continue to invest in research to ensure a robust and sustainable implementation of trustworthy AI.
Use case 1: AI governance in motor insurance – by Daniel John, Head of the Actuarial Department for Non-life insurance and Head of Data Analytics at Huk Coburg
AI systems can be used to analyse the data from telematics devices in motor insurance. Huk Coburg uses an AI system that reduces claims by improving driving behaviour (telematics has more impact than driving assistance systems) and that leads to possible improvement of traffic infrastructure (telematics recognizes dangerous locations). It also has a sustainability aspect, given that the AI system encourages energy-efficient driving, which saves fuel. In addition, it helps maintain the competitiveness of insurance companies. This makes telematics very important for representing the interests of the insurance industry (Data Act, access to vehicle data, mobility dataspaces).
Privacy might be a concern for the AI system and the data sets used by the insurer. However, to ensure privacy, driving data is hosted in a separate company in order to have high walls between the personal driving data and the traditional insurance data. There is also the right to have the data erased. In addition, there is no sale of data to third parties and there are no disadvantages in the event of a claim. The AI system is used on a voluntary basis, with a daily cancellation option for customers. The customer gets a premium discount of up to 30% for risk-averse driving, with the assessment based on the driving data.
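To illustrate the mechanics (this is not Huk Coburg's actual model), here is a sketch of how telematics events might map to a driving score and a premium discount capped at 30%, the figure mentioned in the talk. The event weights and the scoring rule are my own assumptions.

```python
# Illustrative telematics scoring: penalise risky events per distance
# driven, then map the score to a discount capped at 30%.
def driving_score(harsh_brakes: int, speeding_events: int,
                  night_km: float, total_km: float) -> float:
    """Score in [0, 100]; higher means safer driving (assumed weights)."""
    if total_km <= 0:
        return 0.0
    per_100km = 100.0 / total_km
    penalty = (5.0 * harsh_brakes + 3.0 * speeding_events) * per_100km
    penalty += 10.0 * (night_km / total_km)  # share of night driving
    return max(0.0, 100.0 - penalty)

def premium_discount(score: float) -> float:
    """Linear mapping of the score to a discount, capped at 30%."""
    return min(0.30, max(0.0, (score - 50.0) / 50.0 * 0.30))

score = driving_score(harsh_brakes=4, speeding_events=2,
                      night_km=120.0, total_km=1500.0)
print(f"score={score:.1f}, discount={premium_discount(score):.1%}")
```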
Mr John shared that risk assessment and pricing in relation to natural persons in the case of life and health insurance is still on the list of high-risk AI in the final position of the Council. An impact assessment would show that "risk assessment and pricing in life and health insurance" covers a broad range of use cases that are not high risk at all. Therefore, the list of high-risk AI should be changed in the upcoming EU negotiations on the AI Act. He argued that no insurance use case should be put on the list of high-risk AI without a prior and detailed impact assessment. AI needs fast and frequent development cycles, and too much regulation would be the end of AI in insurance and a great disadvantage compared to competitors such as big techs, start-ups, OEMs and aggregators. There is a need for a level playing field.
Use case 2: AI governance in claims liability allocation in motor insurance – by Wolfgang Hauner, Head of Group Data Analytics at Allianz SE
It is important to know who is liable in a motor accident in order to determine coverage, steer the customer, enable recovery and to route and prioritize a claim. Allianz is using AI to determine which party is liable in the event of an accident. Firstly, the data points are collected: police and witness reports, lawyer and court letters, and pictures of the accident and/or traffic circumstances for both parties. Afterwards, this data is processed by the Smart Liability Engine (SLE) using AI, and the end result of this process is the liability detection. This system can play a key role in anticipating claim handling time, reducing waiting time for the customer and providing better service to the customer.
The use of AI enables a new and engaging customer experience with personalized coverage on demand, where the company matches the right customers with the right products. Thanks to better risk assessment with AI, the company can push the boundaries of insurability. Using AI, claims are processed quickly and fairly, and agents can serve customers in a personalized and effective way.
Looking at robustness and performance, a confidence-level threshold can be defined, and the model performance at Allianz Italy is approximately 95%. Claims handlers can call the SLE multiple times to get the best possible statement on liability. There is also human involvement to support claims handlers: the claims handler sees the output of the SLE together with the displayed confidence level and triggers the next steps accordingly. It is an accurate tool, which can be used in court.
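The human-oversight pattern described here can be sketched as a simple routing rule: act on the model's liability estimate only above a confidence threshold, otherwise refer the claim to a human handler. The class names and the threshold value below are illustrative assumptions, not Allianz's implementation.

```python
# Confidence-threshold routing around a liability model's output.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed; would be tuned per portfolio

@dataclass
class LiabilityEstimate:
    claim_id: str
    liable_party: str   # e.g. "policyholder", "third_party", "shared"
    confidence: float   # model confidence in [0, 1]

def route(estimate: LiabilityEstimate) -> str:
    """Decide the next step; the handler always sees the confidence."""
    if estimate.confidence >= CONFIDENCE_THRESHOLD:
        return (f"{estimate.claim_id}: auto-propose "
                f"'{estimate.liable_party}' ({estimate.confidence:.0%})")
    return (f"{estimate.claim_id}: refer to claims handler "
            f"({estimate.confidence:.0%} below threshold)")

print(route(LiabilityEstimate("C-1001", "third_party", 0.97)))
print(route(LiabilityEstimate("C-1002", "shared", 0.62)))
```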
Use case 3: AI governance in natural catastrophes risk modelling – by Alessandro Bonaita, Group Head of Data Science at Generali Group
AI systems can be used to enrich traditional actuarial approaches with external climatic data. Under the old standard approach without AI, only internal policy data and internal claims data were used for natural disasters, processed by actuarial models such as territorial analysis. This results in low coverage and low risk-profile precision, given that the observations do not cover the whole territory or all temporal trends. If AI is added to this approach, climate data, geographical data and IoT sensors can provide additional information, processed by actuarial models together with AI models such as machine learning and deep learning. This results in full coverage and higher risk-profile precision: external data covering all territories and time trends allows the analysis of specific KPIs for each area, and combining the results from standard actuarial and AI models significantly increases classification accuracy.
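A minimal sketch, under assumed data and model choices, of the hybrid idea described: keep a traditional actuarial estimate as the baseline and let an ML model learn a correction from external climate features. Everything here is synthetic; it only illustrates the blending step, not Generali's models.

```python
# Hybrid actuarial + ML sketch: the ML model learns the residual signal
# that the internal-data tariff misses, using external climate features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000

# Internal view: an actuarial baseline per policy (e.g. territorial tariff).
actuarial_estimate = rng.gamma(shape=2.0, scale=100.0, size=n)

# External view: climate/geographic features (rainfall, elevation, ...).
climate_features = rng.normal(size=(n, 4))
observed_loss = (actuarial_estimate * (1 + 0.3 * climate_features[:, 0])
                 + rng.normal(scale=20.0, size=n))

# ML model fits the gap between observed losses and the baseline.
ml = RandomForestRegressor(random_state=0)
ml.fit(climate_features, observed_loss - actuarial_estimate)

# Blended risk estimate: actuarial baseline + ML correction
# (evaluated in-sample here, purely for illustration).
blended = actuarial_estimate + ml.predict(climate_features)
print(f"baseline MAE: {np.abs(observed_loss - actuarial_estimate).mean():.1f}")
print(f"blended  MAE: {np.abs(observed_loss - blended).mean():.1f}")
```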
AI can significantly improve the fairness of natural catastrophe models, with external data leading to more accurate identification of risk profiles and fairer pricing. It is important that external data is carefully managed in terms of data monitoring, vendor certification and potential GDPR implications. Embedding AI into standard actuarial models can preserve transparency and explainability at acceptable levels. Models using AI have significantly more stable results due to the use of external data covering all territories. The precision of AI models is significantly improved through the ability to model more granular climate data on spatial and temporal trends.
A solid AI governance framework should cover all the steps of an AI lifecycle, from a pre-assessment of different risks, through a step-by-step methodology, to a continuous monitoring at all levels. AI Governance is not just about statistics and math. It involves a holistic approach to people, processes, technology, strategy and communication. Governance itself is only a piece of the road to responsible use of data and AI. A consistent framework based on sound ethical principles is required to carry out concrete implementation actions.
If you are interested in learning more about the regulations around AI, please contact our Data Privacy Team at PwC Legal Switzerland.