Alliance for AI in Healthcare Responds to the European Union Artificial Intelligence Act
The Alliance for Artificial Intelligence in Healthcare (AAIH) brings together a diverse coalition of member organizations from the healthcare sector, encompassing biotechnology companies, health systems, venture capital firms, and academic medical centers. The AAIH presents its observations on the European Union Artificial Intelligence Act ("the Act") below.
The Act, similar to the Biden Administration’s recent Executive Order, takes a risk-based approach that underscores the centrality of human rights, public safety, transparency, and accountability in the implementation of responsible AI systems that can provide myriad benefits to human health and well-being. We believe this risk-based approach is the correct one for regulating AI systems across sectors like healthcare, where the potential for negative outcomes for individuals or vulnerable populations is significant and difficult to remedy after the fact. In general, we believe this approach aligns with the AAIH’s mission to revolutionize healthcare through AI while ensuring these advancements are responsible and ethical. However, we caution that the field is moving quickly, and that if regulations are overly burdensome or make the already heavily regulated healthcare industry harder to navigate, they may deter innovation from smaller or start-up companies that have the potential to deliver significantly improved AI systems.
AAIH Analysis of Key Aspects of the EU AI Act (“The Act”):
Risk-Based Approach: The Act appropriately aims to categorize AI systems based on their risk levels, an effort especially crucial in healthcare, where the implications of AI are often widely varied and profound. The differentiation between high-risk and low-risk AI applications allows a nuanced understanding of AI's impact on healthcare. However, it is imperative that the criteria for categorizing these risks be both clear and adaptable to technological advancements, where systems (such as real-world data surveillance and appropriate labeling of training data and performance) can help to mitigate these risks.
Data Governance and Ethics: Ensuring data access, quality, privacy, and the ethical use of AI is paramount in this field. The Act's emphasis on transparency and accountability in AI operations will bolster trust in AI applications among patients and healthcare professionals. Nonetheless, balancing privacy with the need for large and diverse datasets to train AI models will require continuous dialogue and innovation. It is important to note that differing priorities and values are emerging from the US, the EU, and China; this will be a topic in a follow-on report from the AAIH.
Innovation and Compliance Burden: While the regulatory framework established by the Act is essential for safe and reliable AI applications, it risks imposing significant compliance burdens, especially on startups and smaller companies, that could slow innovation. The Act must provide support mechanisms for smaller entities to navigate the existing regulatory landscape (e.g., MDR and IVDR) without stifling their ability to create new therapies, diagnostics, tools, and services. The Act does provide space for local 'regulatory sandboxes', both to mitigate potential risks and to assess the impacts of regulation on innovation and competition. These sandboxes will operate as a “controlled environment” for testing AI systems against the requirements of the Act under the supervision of technical experts. One early example is Spain's Agency for the Supervision of Artificial Intelligence (AESIA). Slated to begin operations in December, AESIA will be important to observe, both for how it balances these competing concerns and for how it addresses issues such as liability and compliance.
Healthcare-Specific Considerations: The Act must recognize the unique role AI can play in improving healthcare costs, quality, and access, such as in diagnostics, treatment planning, and patient monitoring. Special considerations, such as a more tailored or nuanced regulatory approach, may be necessary for certain areas of healthcare AI to continue to flourish while addressing real concerns around data bias, the lack of high-quality data, imported human bias, and data leakage. Regulations should recognize the real shortcomings of the status quo while trying to minimize challenges specific to algorithmic models.
Ongoing Review and Adaptation: The dynamic nature of AI technology, deployed in very different settings at the regional, national, and international levels, necessitates that the Act be subject to ongoing review and intermittent adaptation, much of which will fall on the member states. By doing so, the Act will ensure that the regulations it establishes remain relevant and effective in the face of rapid technological change.
What the EU AI Act means for companies already complying with the GDPR:
The General Data Protection Regulation (GDPR), which came into effect in the EU in 2018, already addresses several of the issues included in the Act, such as data protection, privacy, transparency, and explainability.
For businesses already adhering to the GDPR, the Act may introduce further complexity. This could lead to confusion and additional, potentially unnecessary, hurdles that companies must navigate to ensure compliance across multiple and possibly duplicative rules.
Enhanced Data Governance: Companies must now adhere to stricter data governance standards, ensuring data quality and ethical use specific to AI applications.
AI-Specific Compliance: The Act introduces AI-specific regulatory requirements, such as bias mitigation and algorithmic transparency, that go far beyond general data privacy laws.
Increased Scrutiny for High-Risk AI: Companies using high-risk AI in healthcare may face intensified scrutiny and stricter compliance standards, even if they already satisfy existing data privacy regulations.
Broader Scope of Accountability: The Act extends accountability beyond data privacy to include AI-specific issues like fairness, non-discrimination, and human oversight.
Need for Continuous Adaptation: Continuous adaptation to evolving standards is essential, as the Act introduces dynamic regulatory frameworks that will need to be updated more frequently than traditional data privacy laws.
Conclusion:
The Act represents a significant step towards a more thoughtful international dialogue on how to construct safe and ethical AI applications, aligning well with the healthcare sector's emphasis on patient safety, human autonomy, and data protection. However, maintaining the proper balance between regulation and innovation is a delicate process, and ongoing attention will be necessary to ensure the Act evolves with rapid advancements in AI technology.
Continuous engagement with all stakeholders within regulatory sandboxes, as the law envisions, including the healthcare sector and industry associations such as the AAIH, will be essential for the Act’s successful implementation. The AAIH and our member organizations stand ready to assist in this ongoing process, helping the field achieve its full potential while adhering to a high standard of ethics and responsible adoption.
Importantly, there are diverging priorities and values surfacing among the US, the EU, and China. These variations will be extensively explored in an upcoming white paper from the AAIH.