Many, many moons ago when I was a dating girl, I discovered that there were basically two categories of bad dates. There was the date that I hoped might lead to something more, but that ended with “I’m not ready to commit.” The second kind — the “TMI” date — made me want to run screaming from the room because the guy shared way, way too much about himself.
I have come to categorize most of the well-meaning attempts I encounter to report hospital mortality data in the same way: either they don’t commit to much of anything in terms of truly useful information, or they provide Too Much Information, and are nearly impossible for the general public to understand.
Before I go any further, let me be clear: I know firsthand that communicating hospital mortality data to all stakeholders, especially members of the general public, is hard — very, very hard. Regardless of this difficulty, we must find ways to communicate hospital mortality data in a clear and understandable way, so that patients and their families can make fully informed decisions.
The “I’m Not Ready to Commit” Date
Consider the following examples, from the Utah Department of Public Health website (the page headed “Hospital Quality Reporting for the Public”); the examples were generated using the AHRQ MONAHRQ tool.
I call this the “we don’t really want to commit” mortality report because it tells the viewer only that this hospital (the University of Utah Hospitals and Clinics) is about the same as, or a bit better or worse than, other hospitals. According to what benchmarks is this comparison made? I’d know the answer if the question concerned blood pressure or cholesterol levels, but I have no clue what the report means in this context. How do patients and their families understand what it means to have an “average” number of mortalities? What sort of informed decisions can they make with this non-committal data?
The “Too Much Information” Date
Now consider the following bar chart from the same site. The blue bar in this graph refers to the hospital on which I searched for data; it indicates that the hospital’s mortality rate was 1.97. This is better, the chart appears to show, than the state average of 2.526 or the national average of 2.9287.
This presentation raises far more questions than it answers, among them:
- What exactly do the numbers 1.97, 2.526, and 2.9287 mean to a patient?
- Why does each bar’s number have progressively more digits after the decimal point (please let me know if you come up with a good answer to this one!)?
- What does the term “per 100 cases” mean? Is a case different from a patient?
- Why, when the first display described the data as “average,” does this one introduce a new term, “mean”?
I categorize this as TMI of the most baffling, jargon-filled type.
People are probably trying to do the right thing here, but presenting the information in this way — no matter how extensive the detail — renders it virtually useless and makes the whole experience completely confusing.
What might be a different and, we hope, better approach?
First, if we truly want to report the information in a way that lets people make fully informed decisions about the quality of care delivered by institutions (and I believe we do), then we have to commit to language that ordinary people can understand.
Rather than labeling the hospital’s performance in the first view as “best” (a misleading term that actually means “better,” according to the hospital’s own chart legend), “average,” or “below,” I propose that two very simple pieces of information be presented using clear, specific language:
- Explain what information validated risk-adjusted models or historical data can reliably predict. And instead of reporting decimal rates per 100, which are difficult for people to understand, we could try something like this:
Based on current research and validated statistical analysis of patients who have undergone bypass surgery at hospitals throughout the United States, from 2008 through 2010:
We anticipate or expect that for every 100 bypass surgeries performed throughout the U.S., two (2) patients may die.
This information takes into account the fact that some patients, those much sicker than others undergoing this surgery, may have a greater risk of dying.
- Next, compare the expected mortalities with what the historical mortality data (after statistical analysis) reveal without categorization (that is, without labels like Best, Average, Below). Consider saying something like this:
Based on current research and proven statistical analysis of information reported by hospitals from 2008 through 2010, we can report that:
For every 100 bypass surgeries performed at the University of Utah, on average, three (3) patients died.
For every 100 bypass surgeries performed at all other hospitals in Utah, on average, four (4) patients died.
For every 100 bypass surgeries performed at all other hospitals in the United States, on average, five (5) patients died.
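The statements above amount to a simple transformation: round the decimal “per 100 cases” rate to a whole number of patients, then phrase the result in plain language. A minimal sketch of that idea in Python, using the decimal rates from the bar chart discussed earlier (1.97, 2.526, 2.9287); the function and variable names are my own, hypothetical choices, not part of any reporting tool:

```python
# Hypothetical sketch: convert decimal "per 100 cases" mortality rates
# into the kind of plain-language statements proposed above.

RATES_PER_100 = {
    "the University of Utah": 1.97,
    "all other hospitals in Utah": 2.526,
    "all other hospitals in the United States": 2.9287,
}

NUMBER_WORDS = ["zero", "one", "two", "three", "four", "five",
                "six", "seven", "eight", "nine", "ten"]

def plain_statement(place: str, rate_per_100: float) -> str:
    """Round a decimal rate to whole patients and phrase it plainly."""
    n = round(rate_per_100)  # 1.97 -> 2: whole patients, not decimals
    word = NUMBER_WORDS[n] if n < len(NUMBER_WORDS) else str(n)
    return (f"For every 100 bypass surgeries performed at {place}, "
            f"on average, {word} ({n}) patients died.")

for place, rate in RATES_PER_100.items():
    print(plain_statement(place, rate))
```

Note that after rounding, the chart’s state rate (2.526) and national rate (2.9287) both come out to three (3) per 100, which is itself an argument that the extra decimal digits carry little meaning for a lay reader.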
By first stating what we predict may happen in a population of patients undergoing bypass surgery, we have established some baseline information, or set an expectation for patients to consider. The information is telling the viewer that some patients who undergo this surgery may die. We have provided a quantifiable result in understandable language.
The next set of statements conveys information about how the hospital being considered has performed historically. The viewer can now evaluate the number of mortalities compared (a) to the expected rate, and (b) to the actual performance of other hospitals in the state and the nation. Those seeking helpful, illuminating information have a frame of reference that empowers them to answer the key question, “Compared with what?”
I have only scratched the surface of this conversation, but I am determined to help all of us to find ways to engage in a committed relationship with our patients…without sending them screaming from the room because of TMI.
By the way, I’m happy to say that there was a third kind of date, the one you marry… happy anniversary, Bret!