🚨 𝗦𝗽𝗼𝘁𝗹𝗶𝗴𝗵𝘁 𝗼𝗻 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝗶𝗻 𝗛𝗲𝗮𝗹𝘁𝗵𝗰𝗮𝗿𝗲: 𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗙𝗗𝗔’𝘀 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗛𝗲𝗮𝗹𝘁𝗵 𝗔𝗱𝘃𝗶𝘀𝗼𝗿𝘆 𝗖𝗼𝗺𝗺𝗶𝘁𝘁𝗲𝗲 𝗠𝗲𝗲𝘁𝗶𝗻𝗴 🚨

On November 20-21, 2024, the FDA hosted its inaugural 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗛𝗲𝗮𝗹𝘁𝗵 𝗔𝗱𝘃𝗶𝘀𝗼𝗿𝘆 𝗖𝗼𝗺𝗺𝗶𝘁𝘁𝗲𝗲 𝗠𝗲𝗲𝘁𝗶𝗻𝗴 to tackle one of the most pressing challenges in healthcare: the regulation of Generative AI-enabled devices. The meeting marked a pivotal moment in how we think about the lifecycle of AI in healthcare, but it also highlighted critical blind spots and paradigm shifts.

🔑 Key Insights from the Meeting:

1️⃣ 𝗔 𝗣𝗮𝗿𝗮𝗱𝗶𝗴𝗺 𝗦𝗵𝗶𝗳𝘁: 𝗙𝗿𝗼𝗺 𝗣𝗿𝗲𝗺𝗮𝗿𝗸𝗲𝘁 𝘁𝗼 𝗣𝗼𝘀𝘁𝗺𝗮𝗿𝗸𝗲𝘁 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗶𝗼𝗻
Generative AI, unlike traditional devices or drugs, is probabilistic and dynamic, requiring robust postmarket monitoring. This is a significant shift from the FDA's historically premarket-focused regulatory model.

2️⃣ 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 ≠ 𝗔𝗹𝗹 𝗔𝗜
It’s encouraging to see the FDA address Generative AI specifically, acknowledging that "AI" isn’t a monolith. The nuances of GenAI demand tailored regulatory approaches, lest blanket rules designed for other AI technologies stifle innovation.

3️⃣ 𝗧𝗵𝗲 𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲
A key recommendation was full disclosure of the datasets used to train AI models. This is often impractical, however, especially for products built on off-the-shelf foundation models; Nabla, for example, relies on OpenAI's Whisper, which was trained on vast, undisclosed datasets. Regulators may need to adjust expectations to address these realities without compromising safety.

4️⃣ 𝗧𝗵𝗲 𝗛𝘂𝗺𝗮𝗻 𝗢𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁 𝗗𝗶𝗹𝗲𝗺𝗺𝗮
The meeting emphasized extensive human oversight, but requiring intervention for every AI decision is neither scalable nor effective. A better focus could be continuous monitoring systems that flag anomalies early (a minimal sketch follows this post), reducing risks like monitoring fatigue and unchecked human approvals.

5️⃣ 𝗘𝗹𝗲𝗽𝗵𝗮𝗻𝘁 𝗶𝗻 𝘁𝗵𝗲 𝗥𝗼𝗼𝗺 #𝟭: 𝗔𝗜 𝗧𝗼𝗼𝗹𝘀 𝗢𝘂𝘁𝘀𝗶𝗱𝗲 𝗙𝗗𝗔’𝘀 𝗦𝗰𝗼𝗽𝗲
Many rapidly adopted AI tools, such as scribes, administrative systems, and decision-support tools, fall outside the FDA’s SaMD purview despite their substantial downstream impact on patient safety. These tools are instead expected to be governed by other regulatory bodies like CMS, OCR, or ASTP/ONC.

6️⃣ 𝗘𝗹𝗲𝗽𝗵𝗮𝗻𝘁 𝗶𝗻 𝘁𝗵𝗲 𝗥𝗼𝗼𝗺 #𝟮: 𝗜𝗺𝗽𝗮𝗰𝘁 𝗼𝗳 𝗔𝗱𝗺𝗶𝗻𝗶𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻 𝗖𝗵𝗮𝗻𝗴𝗲𝘀
How will changing administrations reshape AI regulation? New leadership could bring shifts in priorities and policies, potentially altering the trajectory of current frameworks.

💬 𝗪𝗵𝗮𝘁 𝗮𝗿𝗲 𝘆𝗼𝘂𝗿 𝘁𝗵𝗼𝘂𝗴𝗵𝘁𝘀 𝗼𝗻 𝘁𝗵𝗲 𝗙𝗗𝗔’𝘀 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵? https://lnkd.in/dkJ9HRpZ
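On point 4️⃣, here is a minimal sketch of what continuous monitoring could mean mechanically: a rolling statistical check on a per-output quality signal that routes outliers to human review. It assumes each generative output carries a scalar quality score (e.g., model confidence); the window size, warm-up count, and z-score threshold are illustrative choices, not values from FDA guidance.

```python
from collections import deque
from statistics import mean, stdev

class OutputMonitor:
    """Rolling z-score check on a per-output quality signal.
    Flags outliers for human review instead of requiring a
    human to sign off on every single output."""

    def __init__(self, window: int = 500, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)   # recent scores only
        self.z_threshold = z_threshold        # illustrative cutoff

    def observe(self, score: float) -> bool:
        """Record one output's score; return True if it looks
        anomalous relative to the recent baseline."""
        flagged = False
        if len(self.history) >= 30:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(score - mu) / sigma > self.z_threshold:
                flagged = True
        self.history.append(score)
        return flagged

# Synthetic scores: a stable stream, then one degraded output.
monitor = OutputMonitor()
for score in [0.92, 0.95, 0.91] * 20 + [0.31]:
    if monitor.observe(score):
        print(f"Anomalous output (score={score}); route to reviewer")
```

A real deployment would track several signals (refusal rate, subgroup performance, latency) and push flags into a review queue, but the shape of the check, a baseline plus outlier detection, stays the same.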
CHARGE - Center for Health AI Regulation, Governance & Ethics
Health and Human Services
Boston, MA · 641 followers
Exploring health AI regulation, governance, ethics, compliance & safety standards
About us
CHARGE is a community dedicated to fostering meaningful discussions on health AI regulation, governance, ethics, compliance & safety. We bring together healthcare stakeholders — including policymakers, compliance and ethics leaders, clinicians, data professionals, and AI vendors — to collaboratively explore the evolving challenges and opportunities in health AI. Through shared insights and expertise, CHARGE aims to shape a responsible, transparent, and ethical future for AI in healthcare.
- Website: chargeai.org
- Industry: Health and Human Services
- Company size: 2-10 employees
- Headquarters: Boston, MA
- Type: Educational
- Founded: 2024
Locations
- Primary: Boston, MA, US
Updates
-
✨ 𝗩𝗼𝗶𝗰𝗲𝘀 𝗶𝗻 𝗛𝗲𝗮𝗹𝘁𝗵𝗰𝗮𝗿𝗲 𝗔𝗜: 𝗧𝗼𝗽 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗶𝗻𝗴 𝗣𝗵𝘆𝘀𝗶𝗰𝗶𝗮𝗻𝘀 𝘁𝗼 𝗙𝗼𝗹𝗹𝗼𝘄 𝗼𝗻 𝗟𝗶𝗻𝗸𝗲𝗱𝗜𝗻 🌟

At #CHARGE, we’ve spotlighted Chief AI Officers and Health AI attorneys shaping the governance of health AI. Today, we turn our attention to practicing physicians: a critical yet often-overlooked group directly engaged in patient care, where the true impact of AI tools will be felt. Tasked with integrating these technologies into their workflows, they offer a boots-on-the-ground perspective on where AI can improve outcomes and where it may fall short. Their insights keep us focused on creating AI tools that truly enhance patient care while addressing real-world challenges.

That’s why we’ve put together this list of 𝗧𝗼𝗽 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗶𝗻𝗴 𝗣𝗵𝘆𝘀𝗶𝗰𝗶𝗮𝗻𝘀 𝘁𝗼 𝗙𝗼𝗹𝗹𝗼𝘄 𝗼𝗻 𝗟𝗶𝗻𝗸𝗲𝗱𝗜𝗻:

• Adir Sommer, MD - Ophthalmology resident at Rambam Health Care Campus and co-developer of the #OPTICA framework for clinical AI, published in NEJM AI.
• Aditya (Adi) Kale - Radiology fellow at the NIHR with a focus on AI and patient safety.
• Amit Kumar Dey - Diabetes specialist, founder of #Doctors_AI, and advocate for clinical AI adoption.
• Annabelle Painter - GP registrar in the NHS, CMO of Visiba UK, and host of the Royal Society of Medicine Digital Health Podcast.
• Benjamin Schwartz, MD, MBA - Orthopedic surgeon and digital health advisor.
• Eric Rothschild, MD - OB/GYN and advisor writing on the intersections of healthcare and AI.
• Graham Walker, MD - Emergency physician and AI innovation leader at The Permanente Medical Group, Inc.
• Jacob Kantrowitz - Primary care physician at Tufts Medicine and co-founder and CMO of River Records.
• James Barry, MD, MBA - Neonatologist and NICU director at UCHealth, and co-founder of #NeoMIND_AI.
• Jesse Ehrenfeld, MD, MPH - Anesthesiologist, immediate past president of the American Medical Association, and advocate for health AI policy.
• Josh Au Yeung - Neurology registrar, Dev&Doc podcast host, and clinical lead at TORTUS.
• LUKASZ KOWALCZYK MD - Gastroenterologist and consultant on health AI development and strategy.
• Morgan Jeffries - Neurologist and Associate Medical Director for AI at Geisinger.
• Piyush Mathur - Anesthesiologist at Cleveland Clinic and co-founder of BrainX AI.
• R. Ryan Sadeghian, MD - Pediatrician and CMIO at The University of Toledo, applying AI to clinical practice.
• Shelly Sharma - Radiologist advancing AI applications in radiology.
• Spencer Dorn - Vice Chair of the Department of Medicine at the University of North Carolina at Chapel Hill and AI thought leader.
• Susan Shelmerdine - Radiology professor and AI advisor at The Royal College of Radiologists.
• Tina Shah, MD, MPH - Pulmonary physician at RWJBarnabas Health and CCO of Abridge.
• Yair Saperstein, MD, MPH - Hospitalist at Mount Sinai Health System and founder of Avo.
-
🌐 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝗧𝗮𝗰𝗸𝗹𝗶𝗻𝗴 𝗛𝗲𝗮𝗹𝘁𝗵𝗰𝗮𝗿𝗲 𝗗𝗶𝘀𝗽𝗮𝗿𝗶𝘁𝗶𝗲𝘀 𝗮𝘁 𝗖𝗵𝗶𝗹𝗱𝗿𝗲𝗻’𝘀 𝗛𝗼𝘀𝗽𝗶𝘁𝗮𝗹 𝗟𝗼𝘀 𝗔𝗻𝗴𝗲𝗹𝗲𝘀 🌐

This article by Katie Palmer in STAT highlights a fascinating and impactful use case of generative AI in healthcare: Children's Hospital Los Angeles (CHLA) is piloting a program to translate discharge notes into Spanish using AI, aiming to improve care for patients with limited English proficiency.

What makes this story remarkable:

1️⃣ 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝗙𝗶𝗴𝗵𝘁𝗶𝗻𝗴 𝗗𝗶𝘀𝗽𝗮𝗿𝗶𝘁𝗶𝗲𝘀: While many fear AI may perpetuate bias and discrimination due to skewed datasets, this is a case where AI is actively addressing healthcare disparities. In a diverse city like Los Angeles, where 60% of CHLA’s patients speak Spanish, this program could make a critical difference.

2️⃣ 𝗣𝗿𝗲𝗽𝗮𝗿𝗶𝗻𝗴 𝗳𝗼𝗿 𝗖𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲 𝘄𝗶𝘁𝗵 𝗢𝗖𝗥 𝗦𝗲𝗰𝘁𝗶𝗼𝗻 𝟭𝟱𝟱𝟳: The pilot also highlights how organizations are gearing up for the nondiscrimination rule under Section 1557, set to take effect in 2025. While this specific initiative focuses on document translation requirements, it’s encouraging to see health systems like CHLA aligning with broader compliance mandates. This suggests readiness for other parts of the regulation, including provisions on AI systems and clinical algorithms.

3️⃣ 𝗔𝗱𝗱𝗿𝗲𝘀𝘀𝗶𝗻𝗴 𝗥𝗶𝘀𝗸𝘀 & 𝗣𝗶𝗼𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀: The hospital’s cautious approach reflects the current lack of universal best practices in AI compliance. By conducting biweekly audits of AI-translated discharge notes (see the sketch after this post) and involving patient focus groups and human translators, CHLA is setting an example of how to test and implement such tools responsibly. As CHLA’s Troy McGuire pointed out, this pilot represents the first time he’s been on board with a machine translation tool.

Organizations like CHLA and Seattle Children's are transforming patient care by using generative AI to address language barriers. Beyond regulatory compliance, these efforts are setting the stage for more inclusive and equitable healthcare communication.

Congratulations to Jaide Health and Joe Corkery, MD for their work on this pilot. Tools like these represent the first steps toward more equitable, accessible healthcare systems.

Read the full article here: https://lnkd.in/dQunAMtd

#GenerativeAI #HealthcareEquity #AICompliance #OCR1557 #AIInMedicine
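On point 3️⃣, the audit mechanics are straightforward to sketch. Below is a hypothetical illustration of drawing a biweekly review sample of AI-translated notes for human translators; the 5% rate and 20-note floor are invented placeholders, not CHLA's actual audit parameters.

```python
import random

def select_audit_sample(note_ids, rate=0.05, minimum=20, seed=None):
    """Pick a random subset of AI-translated discharge notes for
    human-translator review. Rate and floor are illustrative."""
    rng = random.Random(seed)                   # seeded for reproducible audits
    k = max(minimum, int(len(note_ids) * rate))
    k = min(k, len(note_ids))                   # can't sample more notes than exist
    return rng.sample(note_ids, k)

# Example: a biweekly batch of 400 translated notes.
batch = [f"note-{i:04d}" for i in range(400)]
sample = select_audit_sample(batch, seed=42)
print(len(sample), sample[:3])  # 20 notes queued for human review
```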
-
A lot of EHRs were built without the clinician or the patient in mind. The rapid adoption driven by the #HITECH Act prioritized turning clinical documentation into claims and meeting basic Meaningful Use requirements. Unfortunately, this led to significant downstream effects: poor usability, physician burnout, and even safety issues.

AI has the potential to address many of these challenges (as Graham Walker, MD also highlighted in his recent post; see comments). However, it’s equally important to ensure that AI systems themselves are developed and implemented with clinicians and patients at the center. This isn’t just about fixing the shortcomings of EHRs; it’s about preventing AI systems, now being rapidly adopted for clinical and administrative purposes, from introducing new broken workflows driven by misaligned incentives.

#healthcareAI #EHR #AIgovernance #patientcenteredcare
EHRs were never truly designed for clinical care: they were built to digitize outdated paper workflows, with little regard for what clinicians actually need. And thanks to government incentives focused solely on “uptake,” these systems got the green light without ever having to improve our clinical workflows or information management.

Think about it: when you look at an EHR today, what do you see? Data is sorted by type, date, or status, like an endless spreadsheet of disconnected elements. But where’s the purpose? Where’s the problem-oriented structure that reflects how we actually think in medicine? Labs, orders, and notes are all ordered and written for a reason, tagged to a medical problem or covering a set of issues, yet they’re often buried and scattered across the record in a way that loses clinical meaning.

It’s time we move past these systems and start designing tools that support clinicians rather than forcing them into workflows that make little sense. A clinical information management system should be designed with problem-oriented care at its core, not data tables, so that information serves its purpose and clinicians can focus on what they do best: caring for patients.

https://lnkd.in/enU4ZKmn

#healthcareinnovation #EHR #digitalhealth #clinicianworkflows #RiverRecords #medicalAI #betterCharts
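To make “problem-oriented care at its core, not data tables” concrete, here is a toy sketch; it is not River Records' actual data model, and all names are invented. Each lab, order, and note carries a problem tag, so the chart can be read by problem rather than by data type.

```python
from dataclasses import dataclass, field

@dataclass
class Problem:
    """One problem-list entry plus the artifacts created because
    of it -- the inverse of a flat per-type table."""
    name: str
    labs: list = field(default_factory=list)
    orders: list = field(default_factory=list)
    notes: list = field(default_factory=list)

@dataclass
class ProblemOrientedChart:
    problems: dict = field(default_factory=dict)

    def attach(self, problem: str, kind: str, item: str) -> None:
        """Tag a lab, order, or note ('kind') to the problem it serves."""
        entry = self.problems.setdefault(problem, Problem(problem))
        getattr(entry, kind).append(item)

chart = ProblemOrientedChart()
chart.attach("Type 2 diabetes", "labs", "HbA1c 8.2% (2024-11-01)")
chart.attach("Type 2 diabetes", "orders", "Metformin 500 mg BID")
chart.attach("Heart failure", "notes", "Cardiology follow-up note")

# Read the chart the way clinicians think: by problem.
print(chart.problems["Type 2 diabetes"].labs)  # ['HbA1c 8.2% (2024-11-01)']
```

The design choice is simply which key comes first: today's EHRs index by data type and bury the problem; a problem-oriented record indexes by problem and lets data type be the secondary view.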
-
𝗪𝗵𝗲𝗻 "𝗛𝘂𝗺𝗮𝗻 𝗶𝗻 𝘁𝗵𝗲 𝗟𝗼𝗼𝗽" 𝗕𝗲𝗰𝗼𝗺𝗲𝘀 𝘁𝗵𝗲 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 In AI governance and ethics, "human in the loop" is often held up as a safeguard—a way to ensure oversight and ethical deployment. But the 2020 Practice Fusion case reminds us of a critical truth: human involvement doesn’t always prevent harm. Sometimes, it causes it. During the height of the opioid crisis, Practice Fusion and Purdue Pharma collaborated to exploit clinical decision support (CDS) alerts in electronic health records. Instead of aiding physicians with unbiased recommendations, these alerts were deliberately engineered to push opioid prescriptions. This was no technological accident. It was a calculated strategy, where human oversight amplified harm rather than mitigating it. The consequences were significant. Practice Fusion paid $145 million to resolve criminal and civil investigations, including a $25 million fine and stringent compliance requirements under a deferred prosecution agreement. These measures aim to prevent such abuses in the future. But the broader lesson is clear: technology, including AI, is neutral - it reflects the intentions of those who control it. As we advance AI in healthcare, we must critically examine the role of human oversight. Who is the "human in the loop"? Are their motivations ethical? Governance frameworks must address not only oversight mechanisms but also the ethical accountability of the humans behind them. Sometimes, the problem isn’t the algorithm - it’s us. 📎 𝗗𝗲𝘁𝗮𝗶𝗹𝘀 𝗮𝗯𝗼𝘂𝘁 𝘁𝗵𝗲 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲 𝗙𝘂𝘀𝗶𝗼𝗻 𝗰𝗮𝘀𝗲 𝗶𝗻 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀. #AIEthics #AIGovernance #HealthAI #OpioidCrisis
-
❓ 𝗪𝗵𝗮𝘁 𝗗𝗼 𝗥𝗲𝗰𝗲𝗻𝘁 𝗘𝗹𝗲𝗰𝘁𝗶𝗼𝗻𝘀 𝗠𝗲𝗮𝗻 𝗳𝗼𝗿 𝗛𝗲𝗮𝗹𝘁𝗵 𝗔𝗜 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗶𝗼𝗻? ❓

As the recent elections reshape the political landscape, the implications for health AI regulation are a key question waiting to unfold. At #CHARGE, we align with the perspective of Micky Tripathi, the Assistant Secretary for Technology Policy, who recently said he anticipates "a certain continuity of the policies" regardless of who holds the White House. We believe the trend toward regulating health AI will remain steady despite political shifts. Here’s why:

𝟭. 𝗔 𝗕𝗶𝗽𝗮𝗿𝘁𝗶𝘀𝗮𝗻 𝗙𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝗙𝗲𝗱𝗲𝗿𝗮𝗹 𝗔𝗜 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗶𝗼𝗻
AI governance has been a rare area of bipartisan collaboration in Congress, highlighted by the launch of the Joint Task Force on AI in early 2024. A glimpse into the GOP's stance on AI regulation can perhaps be seen in the #Texas_Responsible_AI_Governance_Act draft, which draws from the #EU_AI_Act and addresses high-risk systems, transparency, and discrimination mitigation. With the new Republican majority, Congress could advance comprehensive legislation to establish a unified federal framework for AI governance.

𝟮. 𝗦𝘁𝗮𝘁𝗲-𝗟𝗲𝘃𝗲𝗹 𝗟𝗲𝗮𝗱𝗲𝗿𝘀𝗵𝗶𝗽 𝗪𝗶𝗹𝗹 𝗣𝗲𝗿𝘀𝗶𝘀𝘁
California has already been a leader in AI governance, and states like Texas are now following suit. As federal dynamics evolve, state-driven regulation will likely remain a powerful force in shaping health AI standards.

𝟯. 𝗘𝘅𝗲𝗰𝘂𝘁𝗶𝘃𝗲 𝗢𝗿𝗱𝗲𝗿𝘀 𝗮𝗻𝗱 𝗛𝗶𝘀𝘁𝗼𝗿𝗶𝗰𝗮𝗹 𝗧𝗿𝗲𝗻𝗱𝘀
The first U.S. executive order on AI, issued by the Trump administration in 2019, emphasized "American values" and robust, trustworthy AI. President Biden expanded on it in late 2023, reflecting technological advancements and public interest. The new administration is likely to build on these foundations, maintaining a focus on safe and ethical AI.

𝟰. 𝗠𝘂𝘀𝗸’𝘀 𝗦𝘂𝗿𝗽𝗿𝗶𝘀𝗶𝗻𝗴 𝗦𝘂𝗽𝗽𝗼𝗿𝘁 𝗳𝗼𝗿 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗶𝗼𝗻
Elon Musk, anticipated to play a significant role in the new administration, has been an advocate for AI restraint. His support for California’s #SB_1047 (vetoed by Governor Gavin Newsom) demonstrates his openness to pro-regulation measures in AI, even when it runs counter to his broader reputation.

𝟱. 𝗟𝗶𝘁𝗶𝗴𝗮𝘁𝗶𝗼𝗻 𝗟𝗼𝗼𝗺𝘀 𝗶𝗻 𝗮 𝗟𝗼𝘄-𝗢𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁 𝗘𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁
If federal regulatory bodies are reduced in scope, healthcare providers and payers should brace for a potential rise in patient-initiated litigation. Robust AI governance mechanisms won’t just ensure compliance; they’ll be essential for mitigating legal risks.

The bottom line? While the political winds may shift, the demand for responsible health AI governance should remain strong. Whether through federal legislation, state initiatives, or increased litigation, transparency, fairness, and patient safety will stay at the forefront.
-
⚖️ FDA’s Vision for Responsible Generative AI in Healthcare 🌐

In a pivotal JAMA piece, FDA officials Haider Warraich, Troy Tazbaz, and Robert Califf outline a forward-looking strategy for #generative_AI in healthcare. Recognizing both the immense potential and the unpredictability of large language models, the FDA emphasizes that healthcare AI must be managed with rigorous life cycle oversight: not just at launch, but through continuous monitoring and adaptation to safeguard patients.

One key aspect of the FDA’s approach is the emphasis on #post_market_surveillance, which is set to reshape AI regulation in healthcare. Unlike traditional medical devices, AI models require local, ongoing evaluation (sketched after this post), as their performance can shift over time and vary across patient populations. This will necessitate active collaboration between AI vendors and the healthcare providers, such as hospitals and health systems, where these tools are deployed.

This unique dynamic raises questions about accountability, as the FDA typically places regulatory responsibility solely on manufacturers. In the case of evolving AI tools, however, Commissioner Robert Califf has already hinted at a shift, stating back in September, “I think there’s a lot of good reason for health systems to be concerned that if they don’t step up, they’re going to end up holding the bag on liability when these algorithms go wrong.”

The FDA’s evolving stance suggests a future where healthcare providers may play an active role in AI oversight, sharing the responsibility for ensuring safety and performance in real-world applications. This shift could redefine regulatory accountability in healthcare AI, underscoring the importance of continuous, responsible collaboration among all stakeholders.

Link to the full article in the comments 👇

#CHARGE #FDA #AIGovernance #GenerativeAI #AICompliance #HealthcareAI
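As a hedged illustration of what "local, ongoing evaluation" might look like mechanically: compute a performance metric per patient subgroup from local chart review and flag subgroups that drift below the premarket baseline. The subgroups, numbers, and the 5-point tolerance below are all invented for this sketch.

```python
def subgroup_accuracy(records):
    """records: (subgroup, correct) pairs from local chart review.
    Returns observed accuracy per subgroup."""
    totals, hits = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(correct)
    return {g: hits[g] / totals[g] for g in totals}

def flag_drift(baseline, current, tolerance=0.05):
    """Flag subgroups whose local performance fell more than
    `tolerance` below the premarket baseline (illustrative cutoff)."""
    return [g for g, acc in current.items()
            if g in baseline and baseline[g] - acc > tolerance]

# Hypothetical premarket baseline vs. one local review batch.
baseline = {"pediatric": 0.94, "adult": 0.95, "geriatric": 0.93}
current = subgroup_accuracy([
    ("pediatric", True), ("pediatric", False), ("pediatric", True),
    ("adult", True), ("adult", True), ("adult", True),
    ("geriatric", True), ("geriatric", False), ("geriatric", False),
])
print(flag_drift(baseline, current))  # ['pediatric', 'geriatric']
```

In practice, the metric, review cadence, and escalation path would be negotiated between the vendor and the health system, which is exactly the shared-accountability question the article raises.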
-
This is a strong overview of some of the key players shaping the health AI governance space. At CHARGE, we would add that ONC/the Assistant Secretary for Technology Policy has already laid out a well-defined strategy in this area, especially with the upcoming HTI-1 mandate. Starting January 1, 2025, health technology vendors must be in compliance with the HTI-1 final rule. Specifically, ONC will certify predictive decision support interventions (Predictive DSIs) through its Authorized Certification Bodies (ONC-ACBs) within the Certified Health IT program.

On the FDA side, efforts are also advancing, particularly around the regulation of generative AI technologies. The FDA’s Digital Health Advisory Committee (DHAC) will meet in November to discuss total product lifecycle considerations for generative AI-enabled devices. For more details, the agency has published a comprehensive Executive Summary for the meeting (link in the comments).

#HealthcareAI #AIGovernance #DigitalHealth #FDA #ONC #HealthTech #CHARGE
There's a secondary AI land grab happening in health care to commercialize "governance" of AI software. You might be surprised who's staking out positions in the nascent Health AI Governance market: (All links in comments)

1. CHAI (Coalition for Health AI) was founded very recently, is very active, and has a ton of high-impact academic and industry membership developing guidelines and certification frameworks. Some of the output is borderline pedantic, but they're iterating fast and trying to get practical with an "Assurance Labs" program akin to ONC-ATLs.

2. TJC (The Joint Commission) is the hospital accreditation juggernaut, and in late 2023 they announced a new certification in relation to AI: Responsible Use of Health Data Certification. It doesn't seem to get into model measurement, but it overlaps upstream with data use.

3. Avanade SAIGE ("Smart AI Governance Engine" from Duke, Microsoft, Accenture) was announced in the Sphere at HLTH this week and so far seems, like the Sphere itself, to be hollow. The promo video is set to an anthemic score and the b-roll footage is so mid. But never underestimate Duke or Microsoft, so we'll see. The focus seems to be registration and control of AI rather than measurement, for now. Makes sense: measurement is way harder.

4. DiME Seal (Digital Medicine Society Seal) is meant to be an attestation of quality rather than a governance model, product, or service. It was announced this month and appears to have 15 products that have gone through it.

5. Aidoc BRIDGE with NVIDIA (Blueprint for Resilient Integration and Deployment of Guided Excellence). I don't know what to say about this one; I'm still trying to make sense of it after their HLTH announcement.

6. Epic Seismometer (their one and only open-source project) is focused on measurement rather than governance, which makes a lot of sense since the EMR is the cosmic microwave background of health care workflows. Instrumentation must happen in the EMR.

7. HIMSS AMAM (Adoption Model for Analytics Maturity) is... I don't know what. The EMRAM certainly had traction for EMR adoption, but it's unclear if HIMSS can use their existing distribution to drive adoption of AMAM when it seems a bit off the AI mark.

Valid AI was launched out of UC Davis earlier this year, but seems to be inactive***. Surprisingly to some, and perhaps expected by others, the NCQA and AMIA don't seem to have staked out clear positions or products or services just yet.

ASTP/ONC and the FDA are the obvious federal forces governing this territory. It strikes me as tricky, let's say, to have so much overlap in quality assurance products and services right now when the sheriffs in town don't quite have things figured out. It's a sign we're living in the Wild West of health care AI.

Time will tell where all these "governance" products and services go. My hope is we do not end up in another CQM-like boondoggle where the numbers often mean more to CFOs than patients.

*** CORRECTION: Valid AI is active.
-
🚨 Healthcare AI Compliance Survey: Industry Leaders Wanted! 🚨

As the healthcare landscape rapidly evolves, regulations like the Section 1557 rule for decision support tools and the DOJ's updated ECCP guidelines are set to transform how AI is governed in healthcare. We’re conducting the first-ever industry-wide survey to capture how health systems and health plans are preparing for these pivotal changes.

If you're a compliance leader, legal expert, or AI/data leader in healthcare, this is your chance to shape the future of AI compliance. Your insights will help set a benchmark for how the industry navigates the upcoming regulatory challenges.

🔗 Want to be a part of this? Fill out the quick form below, and we’ll reach out for a qualitative interview. Don’t miss the chance to have your voice heard in this critical industry moment.

https://lnkd.in/dKUVph8m (Limited spaces; please fill out the form if you're interested)

#HealthTech #AICompliance #HealthAI #HealthcareRegulation #AIinHealthcare #AIGovernance
CHARGE - Healthcare AI Compliance Survey
docs.google.com
-
The American Medical Association (AMA) recommends against physicians using LLM-based tools like ChatGPT for assistance with clinical decisions, according to immediate past president Jesse Ehrenfeld MD MPH. “Today, we don’t have confidence that those tools will actually give the right answers one hundred percent of the time,” Ehrenfeld told Fierce Healthcare. There is also no set standard for design, performance evaluation, and safety, he added.

In its ethics opinions on quality and innovation, the AMA notes physicians should use new technology that has a proven positive impact on outcomes, and it calls on institutions to ensure that the technologies available to doctors meet the highest standards. Physicians who adopt innovations should also “have relevant knowledge and skills.”

“There’s a lot of promise,” Ehrenfeld said, but safe LLMs require special-purpose models “that have guardrails around them.” Using an unregulated tool for clinical decisions today could increase physician liability, Ehrenfeld stressed.

🔗 Link in the first comment 👇