UPMC has developed a virtual environment, known as Ahavi, specifically designed to validate health AI models. According to Jeffrey Jones, SVP of Product Development at UPMC Enterprises, "This is an environment that allows our organization to assess the efficacy of AI models against our patient population prior to ever having to deploy it against our actual population." https://lnkd.in/e522KfVB
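For illustration only, here is a minimal sketch of what this kind of pre-deployment, local validation can look like in practice. UPMC has not published Ahavi's internals; the model interface, metrics, and thresholds below are assumptions, not a description of their environment.

```python
# Hypothetical sketch: scoring a candidate model against a locally representative,
# retrospective cohort before deployment. Metric choices and cutoffs are illustrative.
from sklearn.metrics import roc_auc_score, brier_score_loss

def validate_locally(model, X_local, y_local, auroc_floor=0.80, brier_ceiling=0.20):
    """Evaluate a vendor model on local patient data and flag whether it meets local bars."""
    probs = model.predict_proba(X_local)[:, 1]      # predicted risk on the local cohort
    auroc = roc_auc_score(y_local, probs)           # discrimination
    brier = brier_score_loss(y_local, probs)        # calibration proxy
    passed = auroc >= auroc_floor and brier <= brier_ceiling
    return {"auroc": auroc, "brier": brier, "deploy_candidate": passed}
```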
CHARGE - Center for Health AI Regulation, Governance & Ethics
Health and Human Services
Boston, MA · 1,109 followers
Exploring healthcare AI regulation, governance, ethics & safety standards
About us
CHARGE is a community dedicated to fostering meaningful discussions on health AI regulation, governance, ethics, compliance & safety. We bring together healthcare stakeholders — including policymakers, compliance and ethics leaders, clinicians, data professionals, and AI vendors — to collaboratively explore the evolving challenges and opportunities in health AI. Through shared insights and expertise, CHARGE aims to shape a responsible, transparent, and ethical future for AI in healthcare.
- Website: chargeai.org
- Industry: Health and Human Services
- Company size: 2-10 employees
- Headquarters: Boston, MA
- Type: Educational
- Founded: 2024
Locations
- Primary: Boston, MA, US
Updates
-
Jason Hill, Ochsner Health’s innovation officer, said he goes to sleep most nights and wakes up most mornings worried about one thing: the state of generative AI governance in healthcare. To him, providers and other healthcare organizations are in dire need of frameworks to ensure their AI tools are safe and perform well over time.
-
Certain AI regulations in healthcare mandate disclaimers when using AI. For example, California's recently enacted AB 3030 requires healthcare organizations that utilize #generative_AI to create written or verbal patient communications involving clinical information to include a disclaimer explicitly stating that the communication was AI-generated. Such disclaimers, while essential for transparency and patient trust, can create significant operational challenges for health systems. They require the meticulous registration and oversight of all AI tools, as well as the creation and management of distinct workflows for each tool to consistently communicate disclaimers to patients. In this context, the recent paper "A Heuristic for Notifying Patients About AI: From Institutional Declarations to Informed Consent," published in The American Journal of Bioethics by Matthew Elmore, Nicoleta Economou, and Michael Pencina, provides a highly practical framework. The authors propose a structured heuristic to determine how and when to notify patients about AI use, balancing transparency and ethical considerations with operational feasibility. Their approach categorizes AI tools based on clinical risk and AI autonomy, suggesting tailored notification strategies ranging from broad institutional declarations to detailed, informed consent processes. This paper significantly contributes to simplifying compliance with regulations like AB 3030, helping healthcare providers operationalize mandatory disclaimers without compromising patient safety or trust. https://lnkd.in/gvM84ZkK
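As a rough illustration of how a risk-by-autonomy notification heuristic could be encoded, here is a minimal sketch. The tier names and cutoffs below are assumptions for illustration, not the categories published by the paper's authors.

```python
# Hypothetical sketch of a risk-by-autonomy notification heuristic.
# Tiers and labels are illustrative, not the authors' published framework.
def notification_tier(clinical_risk: str, ai_autonomy: str) -> str:
    """Map an AI tool's clinical risk and degree of autonomy to a notification strategy."""
    high_risk = clinical_risk == "high"
    high_autonomy = ai_autonomy == "autonomous"
    if high_risk and high_autonomy:
        return "informed consent"           # detailed, per-encounter consent process
    if high_risk or high_autonomy:
        return "point-of-use disclaimer"    # e.g., AB 3030-style message labeling
    return "institutional declaration"      # broad, site-level disclosure

print(notification_tier("low", "assistive"))   # -> institutional declaration
```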
-
While the future of federal oversight of healthcare AI remains uncertain, state-level AI legislation continues to evolve. https://lnkd.in/d6G8BYcX
-
Primary care physicians see immense potential in AI - but they also have significant concerns, according to a recent survey by Rock Health and the American Academy of Family Physicians. While most primary care clinicians are optimistic about AI improving their clinical efficiency, workload, and personal wellbeing, they also highlighted serious concerns:
◉ 81% say they need more training to fully trust AI solutions.
◉ Nearly 70% want medico-legal protections before fully adopting AI.
◉ 64% want education on legal, liability, and malpractice risks.
◉ 68% seek ethical guidelines to ensure responsible AI use.
Read more insights from the survey here: https://lnkd.in/gFpF-mCR
-
In a recent JAMA article, Peter Embí, M.D., M.S. and colleagues describe the launch of #TRAIN (Trustworthy and Responsible AI Network), a healthcare consortium established in 2024. TRAIN aims to develop collaborative governance frameworks, practical tools, and standardized approaches for AI deployment across healthcare systems. The consortium currently includes over 50 organizations, such as Vanderbilt University Medical Center, Duke University Health System, Advocate Health, UT Southwestern Medical Center, and Northwestern Medicine. The proliferation of such health system consortia focused on promoting trustworthy AI in healthcare - like the Coalition for Health AI (CHAI), Health AI Partnership, and VALID AI - highlights the pressing need for best practices in AI implementation within healthcare settings. Notably, Microsoft is among TRAIN's founding organizations, as it is at CHAI, which also counts major technology companies like Google among its members, alongside prominent healthcare systems actively involved in incubating AI ventures. This model, as promoted by CHAI, has previously faced criticism from Republican lawmakers, who argued last year that it could place large organizations actively developing and commercializing AI models in the position of evaluating AI programs created by affiliated entities or competitors. https://lnkd.in/e58PW6Vk
-
Big Tech's data centres aren't just energy-intensive - they're increasingly recognized as a public health concern. While #AI_safety discussions in healthcare typically focus on direct impacts, we often overlook the hidden consequences of pollution associated with AI infrastructure. Recent research from the University of California, Riverside and Caltech, led by Associate Professor Shaolei Ren, highlights this issue, revealing that pollution from data centres operated by tech giants such as Google, Microsoft, Meta, and Amazon has caused more than $5.4 billion in healthcare costs across the US over the past five years. Operating data centres requires significant amounts of electricity, much of which is generated from fossil fuels. Notably, a single #ChatGPT query consumes nearly ten times the electricity of a standard Google search. This reliance on fossil fuels results in greenhouse gas emissions linked to respiratory diseases, cancer, and other serious health conditions, particularly affecting communities situated near these facilities. https://lnkd.in/d8yxvw7a
-
According to a recent survey by the American Medical Association, 61% of physicians are concerned that health plans' increasing use of #AI for prior authorization is leading to more denials, exacerbating avoidable patient harm. “Using AI-enabled tools to automatically deny more and more needed care is not the reform of prior authorization physicians and patients are calling for,” said AMA President Bruce Scott, MD. “Emerging evidence shows that insurers use automated decision-making systems to create systematic batch denials with little or no human review, placing barriers between patients and necessary medical care. Medical decisions must be made by physicians and their patients without interference from unregulated and unsupervised AI technology.” This news comes as major insurers, including UnitedHealth and Humana, face class-action lawsuits alleging discriminatory use of AI in utilization management practices. https://lnkd.in/dNeqZgdJ
-
Coalition for Health AI (CHAI) is developing a model card registry to provide AI purchasers, such as health systems, with essential insights into AI models' training data, fairness metrics, and intended uses. AI vendors included in this registry receive a CHAI “stamp of approval” upon successfully completing a CHAI model card. This development by CHAI represents an important advancement in promoting transparency and streamlining AI procurement for healthcare organizations. It complements previous initiatives by the Assistant Secretary for Technology Policy and aligns closely with the HTI-1 rule, which became effective in January. However, as the Fierce Healthcare article states, "The model registry does not solve the problem of validating the model, which requires evaluating the model’s performance against a locally representative data set, among other technical tests."
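To make the idea concrete, here is a sketch of the kind of structured disclosure a model card registry entry might capture. The field names, model, and values below are assumptions for illustration, not CHAI's actual registry schema.

```python
# Hypothetical sketch of a model card entry; fields and values are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    model_name: str
    developer: str
    intended_use: str                   # clinical task and target population
    training_data_summary: str          # provenance and time span of training data
    fairness_metrics: Dict[str, float] = field(default_factory=dict)  # e.g., AUROC by subgroup
    known_limitations: List[str] = field(default_factory=list)

card = ModelCard(
    model_name="sepsis-risk-v2",
    developer="ExampleVendor",
    intended_use="Early sepsis risk flagging for adult inpatients",
    training_data_summary="Retrospective EHR data, 2018-2023, three US health systems",
    fairness_metrics={"auroc_female": 0.81, "auroc_male": 0.83},
    known_limitations=["Not validated for pediatric patients"],
)
```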
-
The trend of health systems appointing dedicated C-level executives to oversee AI deployment continues to expand, and for the first time, we're seeing this significant development outside the US and Canada. Sheba Medical Center, Tel Hashomer in Israel has appointed Eyal Zimlichman, MD as its first-ever #Chief_AI_Officer. This milestone highlights the increasing importance and strategic priority AI holds in both clinical and administrative functions within modern healthcare systems. Similar to how the widespread adoption of electronic medical records created the need for Chief Medical Information Officers, the emergence of Chief AI Officers signals AI's rapidly growing role and impact in healthcare delivery.
🌟 A New Era in AI and Healthcare
Eyal Zimlichman, MD has been appointed Chief AI Officer at Sheba Medical Center, leading efforts to transform Sheba Medical Center, Tel Hashomer into an AI-powered hospital alongside his current role as VP of Transformation and Innovation. In his new role, Prof. Zimlichman will drive the development of AI solutions across early detection, diagnosis, decision-making, and patient experience. As VP of Transformation and Innovation and founder of ARC, he has collaborated with doctors, researchers, tech companies, and startups to advance AI projects and integrate smart solutions into healthcare systems. This appointment marks a major leap into the future of healthcare, where AI can revolutionize how we approach medical care, streamline processes, and improve patient outcomes. With AI at the core of ARC and Sheba’s vision, this is a significant step toward creating smarter, more efficient healthcare systems for tomorrow. #AI #HealthcareInnovation #ARCInnovation #ShebaMedicalCenter #Transformation #FutureOfHealthcare #AIinHealthcare