Monday 6 December 2010

"Invisibility, inertia and income" and patient safety

Hat tip to @KentBottles for a link to this story

I spend a lot of time thinking about the quality and safety of our healthcare system, as well as our efforts to improve it. I have written a lot about it here in this blog and in some of my peer-reviewed publications. You, my reader, have surely sensed my frustration with the fact that we have been unable to put any kind of a dent in the killing that goes on within our hospitals and other healthcare encounter locations. So, it is always with much interest and appreciation that I learn that I am not alone, and that others have had it with the criminal lack of urgency about stopping this medical holocaust. For this reason, I was really happy to read Michael Millenson's post on the Health Affairs Blog titled "Why We Still Kill Patients: Invisibility, Inertia and Income". I was very curious to see how he structured his argument to boil it down to these three I's, since I think that sexy slogans and memorable triplets are the way to go. So, here is how his argument went.

First, establish the problem. And indeed, we have been killing around 100,000 people annually since the late 1970s (and probably before then; you only find these deaths if you actually look for them), which amounts to a 20-year toll of 2.5 million unnecessary deaths due to healthcare in the US. This is truly appalling. And this is just up through the 1999 IOM report! Here is what I was thinking: if we take into account not just the killing fields of the hospital, but all of life's interfaces with healthcare, we arrive at an even more frightening 400,000 deaths annually, a figure already known back in 2000. Multiply this by 10 for the decade since, and now we really are talking about a killing machine of holocaust proportions! And I completely agree with Millenson that the fact that we continue to say "more research needed" and other pablum like that is utterly and completely irresponsible. However, is this really an invisible problem? The author makes a good argument for how we minimize these numbers by failing to add them up:

I laid out those numbers in a March 2003 Health Affairs article that challenged the profession to break a silence of deed — failing to take corrective actions — and a silence of word — failing to discuss openly the consequences of that failure. This pervasive silence, I wrote:
continually distorts the public policy debate [and] gives individuals and institutions that must undergo difficult changes a license to postpone them. Most seriously of all, it allows tens of thousands of preventable patient deaths and injuries to continue to accumulate while the industry only gradually starts to fix a problem that is both long-standing and urgent.
Nearly eight years later, medical professionals now talk freely about the existence of error and loudly about the need for combating it, but silence about the extent of professional inaction and its causes remains the norm. You can see it in this latest study, which decries the continuing “patient-safety epidemic” while failing to do next what any public health professional would instinctually do: tally up the toll. Instead, we get dry language about the IOM’s goal of a 50 percent error reduction over five years not being met.
Let’s fill in the blanks: If this unchecked “epidemic” were influenza and not iatrogenesis, then from 1999 to date it would have killed the equivalent of every man, woman and child in the cities of Raleigh (this study took place in North Carolina) and Washington, D.C. Does a disaster of that magnitude really suggest that “further study” and a “refocusing of resources” are what’s needed?
I guess this makes sense -- adding up the numbers is pretty startling, yet we are reluctant to do so. At the same time I hesitate to call this "invisible", since, as you saw a couple of paragraphs above, I just multiplied by 10! Yet I am willing to concede the first "I" to Millenson, since I do see the power in these startling numbers.
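
Since the whole point is that nobody adds these numbers up, it may be worth doing the arithmetic explicitly. Here is a back-of-the-envelope sketch in Python, using only the round figures cited above; nothing in it is new data:

# Back-of-the-envelope arithmetic using only the figures cited in this post.

# The 20-year toll behind the 1999 IOM report: 2.5 million unnecessary deaths
# over roughly 20 years implies an annual rate consistent with the oft-quoted
# "around 100,000" per year.
print(2_500_000 / 20)  # 125000.0 deaths per year

# The broader estimate known back in 2000: ~400,000 deaths annually across all
# of life's interfaces with healthcare. Multiply by the ~10 years since then:
print(400_000 * 10)    # 4000000 deaths over the decade

Tallied this way, the decade since 2000 alone dwarfs the 20-year toll that preceded the IOM report, which is exactly Millenson's point about the silence that comes from never adding things up.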


On to the next "I", inertia. I agree with Millenson generally, and we actually know this: physicians do not reliably practice evidence-based medicine, and even when evidence does make it into practice, it takes decades to penetrate. And there is every reason to be upset that the medical profession has not rushed to adopt the evidence-based prevention measures that Millenson talks about. But there is a greater subtlety here than meets the eye. True, the Keystone project is frequently held up as an example of a simple evidence-based bundled intervention resulting in a huge reduction in central line-associated bloodstream infections. Indeed, this is a great success, and everyone should be practicing the checklist instituted in the project by Peter Pronovost's group. What is less obvious and even less talked about is that the same evidence-based bundled approach to prevention of ventilator-associated pneumonia (VAP) has also been piloted by the Keystone group, yet none of us has seen any data from that. All I have is rumors at this point, but they are not good. Why is this? Well, I have discussed this before here and here: VAP is a very tricky diagnosis in a very tricky population. This is not to say that we need not work as hard as we can to prevent it. It is just to clarify that we are not sure of the best ways to accomplish this. Is this in and of itself shameful? Well, yes, if you think that medicine is a precise science. But if you have been reading my blog long enough, you know this is not the case.


Millenson further cites his reading of the Joint Commission Journal, which has been documenting the progress within one large Catholic healthcare system, Ascension, in its efforts to reduce infections, falls and other common iatrogenic harms. By the system's account, they are now able to save over 2,000 lives annually with these measures. This is impressive. But is it trustworthy? Unfortunately, without reading the primary studies I cannot comment on that. However, I did publish a review of studies from this very journal on VAP prevention efforts, and here is what I found:
A systematic approach to understanding this research revealed multiple shortcomings. First, since all of the papers reported positive results and none reported negative ones, there is a potential for publication bias. For example, a recent story in a non-peer-reviewed trade publication questioned the effectiveness of bundle implementation in a trauma ICU, where the VAP rate actually increased directionally from 10 cases per 1,000 MV days in the period before to 11.9 cases per 1,000 MV days in the period after implementation of the bundle (24). This was in contradistinction to the medical ICU in the same institution, which achieved a reduction from 7.8 to 2.0 cases per 1,000 MV days with the same intervention (24). Since the results did not appear in a peer-reviewed form, it is difficult to judge the quality or significance of these data; however, the report does highlight the need for further investigation, particularly focusing on groups at heightened risk for VAP, such as trauma and neurological critically ill (25).             
Second, each of the four reported studies suffers from a great potential for selection bias, which was likely present in the way VAP was diagnosed. Since all of the studies were naturalistic and none was blinded, and since all of the participants were aware of the overarching purpose of the intervention, the diagnostic accuracy of VAP may have been different before as compared to after the intervention. This concern is heightened by the fact that only one study reports employing the same team approach to VAP identification in the two periods compared (23). In other studies, although all used the CDC-NNIS VAP definition, there was either no reporting of or heterogeneity in the personnel and methods of applying these definitions. Given the likely pressure to show measurable improvement to the management, it is possible that VAP classification suffered from a bias.
Third, although interventional in nature, naturalistic quality improvement studies can suffer from confounding much in the same way that observational epidemiologic studies do. Since none of the studies addressed issues related to case mix, seasonal variations, secular trends in VAP, and since in each of the studies adjunct measures were employed to prevent VAP, there is a strong possibility that some or all of these factors, if examined, would alter the strength of the association between the bundle intervention and VAP development. Additional components that may have played a role in the success of any intervention are the size and academic affiliation of the hospital. In a study of interventions aimed at reducing the risk of CRBSI, Pronovost et al. found that smaller institutions had a greater magnitude of success with the intervention than their larger counterparts (26). Similarly, in a study looking at an educational program to reduce the risk of VAP, investigators found that community hospital staff were less likely to complete the educational module than the staff at an academic institution; in turn, the rate of VAP was correlated with the completion of the educational program (27). Finally, although two of the studies included in this review represent data from over 20 ICUs each (20, 22), the generalizability of the findings in each remains in question. For example, the study by Unahalekhaka and colleagues was performed in the institutions in Thailand, where patient mix and the systems of care for the critically ill may differ dramatically from those in the US and other countries in the developed world (22). On the other hand, while the study by Resar and coworkers represents a cross section of institutions within the US and Canada, no descriptions are given of the particular ICUs with respect to the structure and size of their institutions, patient mix or ICU care model (e.g., open vs. closed; intensivists present vs. intensivists absent, etc.) (20). This aggregate presentation of the results gives one little room to judge what settings may benefit most and least from the described interventions. The third study includes data from only two small ICUs in two community institutions in the US (21), while the remaining study represents a single ICU in a community hospital where ICU patients are not cared for by an intensivist (23).  Since it is acknowledged that a dedicated intensivist model leads to improved ICU outcomes (28, 29), the latter study has limited usefulness to institutions that have a more rigorous ICU care model.           
So, not to toot my own horn here, and not expecting you to read the long-winded Discussion, suffice it to say that we found enough methodologic errors in this body of research from the Joint Commission's own journal to potentially invalidate nearly all of the reported findings. My point, again, is that unless you read each study with a critical eye and then put it into the larger context, you should not believe someone else's cursory reference to staggering improvements. Pertinent to our discussion, then: inertia, while present, is a more nuanced issue than we are led to believe.
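
As an aside for anyone not steeped in these units: VAP incidence is conventionally expressed as cases per 1,000 mechanical ventilation (MV) days, so the rates quoted in the excerpt are just case counts normalized by how long patients spent on the ventilator. A minimal sketch of that calculation; the counts and denominators below are hypothetical, chosen only to reproduce the quoted trauma ICU rates, since the actual data never appeared in peer-reviewed form:

def vap_rate(cases, mv_days):
    """VAP incidence as cases per 1,000 mechanical ventilation days."""
    return cases / mv_days * 1000

# Hypothetical counts chosen only to reproduce the quoted trauma ICU rates;
# the real numerators and denominators were never reported in peer-reviewed form.
print(vap_rate(cases=20, mv_days=2000))  # 10.0 per 1,000 MV days before the bundle
print(vap_rate(cases=24, mv_days=2017))  # ~11.9 per 1,000 MV days after the bundle

Note what small numbers can do here: a handful of extra cases, or a slightly different denominator, swings the rate by 20 percent. That is one more reason to withhold judgment on before-and-after comparisons that never passed peer review.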


And finally, income. I do agree that it is annoying that economic arguments are even necessary to promote a culture of prevention and safety. What I disagree with is the idea that these economic fallacies of the C-suite in any way impede the implementation of the needed prevention systems. Most of the evidence-based preventions are pretty low-tech. And although they do require teams and commitment and systems to implement broadly, small demonstrations at the level of individual clinicians are possible. Also, I shudder at the thought that a group of dedicated clinicians could not persuade a group of equally dedicated administrators to do the right thing, even at the risk of losing some revenue.


Bottom line? While I like Millenson's sexy little "three I's of safety", I think the solutions, as is always the case when you start looking under the hood, are more complicated and nuanced. In a recent post I cited five potential solutions to our quality problem, and I will repeat them here:
1. Empower clinicians to provide only care that is likely to produce a benefit that outweighs risks, be they physical or emotional.
2. Reward the signal and not the noise. I wrote about this here and here.
3. Reward clinicians with more time rather than money. Although I am not aware of any data to back up this hypothesis, my intuition is that slowing down the appointment may not only reduce harm by cutting out unnecessary interventions, but also lower overall healthcare expenditures. It is also sure to improve the crumbling therapeutic relationship.
4. We need to re-engineer our research enterprise for the most important stakeholder in healthcare: the clinician-patient dyad. We need to make the data that are currently manufactured and consumed for large scale policy decisions more friendly at the individual level. And as a corollary, we need to re-think how we help information diffuse into practice and adopt some of the methods of the social sciences.
5. Let's get back to the tried and true methods of public health, where an ounce of prevention continues to be worth a pound of cure. Yes, let's strive to reduce cancer mortality, but let us invest appropriately in stuffing that tobacco horse back into its barn -- getting people to stop smoking will reduce lung cancer mortality by 85% rather than 0.3%, and at a much lower cost with no complications or false positives (see the quick arithmetic after this list). Same goes for our national nutrition and physical activity struggles. Our social policies must support these well-recognized and efficient population interventions.
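
To spell out the contrast in item 5, here is the arithmetic promised above, applied to a purely hypothetical baseline. The percentages are the ones cited in the post; the baseline number is made up for illustration only:

# A purely hypothetical annual lung cancer death toll, for illustration only.
baseline_deaths = 100_000

# The ~85% reduction attributed above to getting people to stop smoking:
print(baseline_deaths * 0.85)   # 85000.0 deaths averted

# Versus the 0.3% reduction cited for the alternative (presumably screening,
# given the mention of false positives and complications):
print(baseline_deaths * 0.003)  # 300.0 deaths averted

Whatever the true baseline, the two approaches differ by more than two orders of magnitude, which is the whole argument for investing in prevention first.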
No, they are not simple, they are not sexy, and most importantly they may be painful. Yet, what is the alternative? We must stop this massive bleeder before the American public starts thinking that the cure is worse than the disease.     
