JUST WHEN YOU THOUGHT YOU UNDERSTOOD PREPAREDNESS

"Since September 11, 2001, and the anthrax attacks that followed, the federal government has invested over $6 billion in efforts to increase the United States' ability to prepare for and respond to public health emergencies.

However, it is unclear whether these efforts have improved the nation's ability to respond to a bioterrorist attack, influenza pandemic, or other large-scale public health emergency." (RAND, 2008)

NOW, it's clear. The problem lies, in part, in the lack of agreement about what #publichealth #emergencypreparedness (PHEP) is and how it should be measured. To help remedy this, the Department of Health and Human Services asked the RAND Corporation to convene a panel to develop "a clear and widely applicable definition of PHEP" that can provide common terms for discussion and establish a basis on which to develop a small core of critical standards and measures.

Twenty years later, this problem of accountability persists: disaster-related morbidity and mortality show no measurable improvement after billions in public spending.

Over the past 20 years, US local, state, and federal agencies have implemented a wide range of measures for #publichealth #emergencypreparedness. “But these efforts have not resulted in a clear picture of the nation’s preparedness owing to ambiguous and uncertain preparedness goals, a lack of agreement about what the measures should aim at and how they should be interpreted, and a weak system of accountability for producing results.” (Lurie et al., 2006, 2007)

In 2012, the #cdc initiated development of the National Health Security Preparedness Index (NHSPI) for “measuring the nation’s progress in preparing for, responding to, and recovering from disasters and other large-scale emergencies that pose risks to health and well-being in the United States.” Ten years later, studies suggest that the #NHSPI may not be a valid predictor of excess COVID-19 mortality rates across the 50 US states and Puerto Rico. (Keim, 2020)
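
To make "valid predictor" concrete: the kind of criterion-validity check such studies imply can be sketched in a few lines of Python. This is a hypothetical illustration, not the method of Keim (2020); the index scores and mortality rates below are invented placeholders.

# Minimal sketch of a criterion-validity check for a preparedness index.
# All numbers are hypothetical placeholders, not actual NHSPI data.
from scipy import stats

# Hypothetical jurisdiction-level index scores (0-10 scale) and
# excess mortality rates (deaths per 100,000 population).
index_scores = [6.8, 7.1, 6.5, 7.4, 6.9, 7.2, 6.6, 7.0]
excess_mortality = [210, 180, 250, 160, 230, 190, 240, 205]

# If the index were a valid predictor, higher preparedness should track
# lower excess mortality: a strong negative rank correlation.
rho, p = stats.spearmanr(index_scores, excess_mortality)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# A rho near zero, or a non-significant p-value, suggests the index
# does not track the outcome it is meant to safeguard.

A real analysis would of course use all 52 jurisdictions, adjust for confounders such as age structure, and specify the hypothesis in advance; the point here is only that the validity question is empirically testable.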

A research-practice gap exists across all fields of public health, including disaster-related health science. Public health has moved in recent years to bridge this gap: #Evidencebased #publichealth calls for knowledge of the determinants and consequences of disease, as well as the efficacy, effectiveness, and costs of interventions. And yet, despite repeated urging from public health leadership, disaster epidemiology remains chiefly concerned with etiological, rather than evaluative, hypotheses.

When we take a closer look, we find that, unlike other public health interventions (such as prevention, health promotion, and health protection), there are no good studies linking "preparedness" or "readiness" to positive health outcomes - surprisingly, none... nada... zilch!

Part of the challenge in establishing a valid scale for preparedness (or readiness) stems from our reliance on so-called #expertopinion rather than on specific #empirical observations to reach the overarching conclusion. These expert panels often lack the accuracy and validity necessary to justify such expansive public health investments.

Practical measures of "preparedness" and "readiness" are unlikely to emerge if they are not directly tied to indicators of health outcome (i.e., morbidity and mortality). This challenge is compounded by the polysemous nature of both of the key terms in question, "preparedness" and "readiness."

Preparedness has proven difficult to define, to measure, and to link to any confirmed impact. Without ever fully defining preparedness, CDC is now "ready" to narrow public health's role down to that of "operational readiness" for response - yet another metric that considers not health outcomes but pre-event capability, decided by committee, not science.
