“So What?”

Quick: imagine that you’re a nurse on a psychiatric inpatient unit asked to fill in an incident report. What do the following choices about who was injured contribute to understanding your injury rate and nurse staffing levels?

  1. RN
  2. LPN/LVN
  3. Unlicensed Nursing-Care Provider
  4. Physician
  5. Mental-Health Technician
  6. Other Health-Care Provider
  7. Social Worker, Psychologist, Counselor
  8. Resident/Intern
  9. Student
  10. Security Personnel
  11. Other Non-Health-Care Employees
  12. Another Patient
  13. Visitor of Patient
  14. Visitor of Another Patient
  15. Other
  16. No Documentation

The correct answer, worth $10,000 and a trip to Hawaii, is… “nothing.” Alas, such confusion is not unusual.

The other day, as I was reviewing data elements in a nursing-quality database, I was thunderstruck by the number of questions and response categories for information like this. Most of it is useless clutter that adds confusion and subtracts clarity, reducing the crucial, actionable data that could lead to improvement to almost nothing.

I spent a lot of time trying to find ways that the details sought in this example added real value for the intended end users (hospital nursing staffs), and I am here to tell you that I didn’t find even a glimmer of one. Rather, I’m still shaking my head at the level of detail the people who created the database ask for: requests that will undoubtedly frustrate those doing the reporting, and detail that adds zero value for the nursing teams trying to improve patient care.

I realized as I analyzed all this that I had arrived at a very high level of WTMI: way too much information.

The data this one section of the database attempts to capture describe physical injuries from assaults in hospital inpatient psychiatric units. According to the database documentation, the purpose of this particular information capture is “to determine the rate of assaults in inpatient settings, the frequency with which the assaults result in injury, and the relationship between episodes of assault and nurse staffing levels.”

Call me crazy (others have), but I fail to see how the information I’ve displayed here satisfies the questionnaire’s goals. The question “Who was injured in the assault?” and its sixteen (16!) possible response categories don’t match the questions they seem designed to answer, nor do they usefully serve the stated intent of the information capture. (And some of the categories, such as 13 and 14, are just… odd. Really?)
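To see how little data those stated goals actually require, here is a minimal sketch in Python. The field names and structures are my own hypothetical illustration, not the database’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AssaultIncident:
    resulted_in_injury: bool  # a yes/no flag answers "did the assault injure?"

@dataclass
class UnitMonth:
    unit_id: str
    patient_days: int                     # exposure denominator
    nursing_hours_per_patient_day: float  # the staffing measure of interest
    incidents: list[AssaultIncident] = field(default_factory=list)

def assault_rate_per_1000_patient_days(um: UnitMonth) -> float:
    """Goal 1: the rate of assaults in the inpatient setting."""
    return 1000 * len(um.incidents) / um.patient_days

def injury_fraction(um: UnitMonth) -> float:
    """Goal 2: the frequency with which assaults result in injury."""
    if not um.incidents:
        return 0.0
    return sum(i.resulted_in_injury for i in um.incidents) / len(um.incidents)

# Goal 3, the staffing relationship, falls out of comparing
# nursing_hours_per_patient_day against these two rates across
# many UnitMonth records.
```

Nothing in that sketch needs sixteen categories of injured party; a yes/no injury flag, an exposure denominator, and a staffing figure answer all three stated questions.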

Unfortunately, I see this problem all too often. The tools to capture and report data have not been developed in a systematic and structured way with a tight focus on the goal sought. Rather, well-meaning people start down the rabbit-hole of WTMI and datasets get bigger and more onerous to complete. The good news, though, is that you can avoid the WTMI trap by faithfully reviewing and heeding the following questions when you are designing (or challenging) a dataset:

  1. What’s the question? Keep this idea in the front of your mind; when in doubt, return to it yet again. All data-developers need to ask themselves–repeatedly–not only “What is the question?” but “How do the data we seek answer it?” This is a simple technique that brings people’s focus back to the reason for capturing specific data and obliges them to test, debate, and defend how the data fit the structure and purpose of the original question.
  2. How will the answers make something happen? If the data won’t lead to action, then why are they being captured? In other words, is this information “need to know” or just “nice to know”? It should always be possible to articulate–clearly and concisely–how the data captured will inform decisions and empower end users to take action. Talk through possible uses and results; make sure that they are realistic and ring true!
  3. Is this theoretical research, or a path to immediate, real-world change? Sometimes seemingly unrelated questions are asked when someone has a specific research project in mind. If that is the situation, then it is perfectly acceptable to include data that may only promote new learning–but you need to be clear about what you hope to learn, and should make the distinctions between theoretical and practical knowledge precise for all stakeholders.

    For example, the question about who was injured may be relevant if the researcher suspects that certain groups of people are being injured more frequently than others. However, you need to make that crystal clear, so that all concerned understand the need for that category of data. If you can’t, drop such questions from the report parameters, because they don’t help to answer your central questions, and won’t lead to useful action.

Here’s an even simpler way to think about all of this: when you’re working with data, challenge yourself and your colleagues with two simple words and one profound question: “So what?” If a data element makes me ask “so what?”, I don’t need it. That pretty much sums it up.
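If you want to make the “so what?” test concrete, one hypothetical way is a data-dictionary audit: every proposed field must name the question it answers and the action its answer enables, or it gets flagged for deletion. The entries below are illustrative, not drawn from the actual database:

```python
# A hypothetical "so what?" audit: every proposed field must declare the
# question it answers and the action it enables, or it gets flagged.
proposed_fields = {
    "assault_occurred": {
        "question": "What is the unit's assault rate?",
        "action": "Track trends and target prevention efforts",
    },
    "resulted_in_injury": {
        "question": "How often do assaults cause injury?",
        "action": "Prioritize the most harmful incident types",
    },
    "visitor_of_another_patient_injured": {
        "question": None,  # no stated question...
        "action": None,    # ...and no action its answer would inform
    },
}

for name, meta in proposed_fields.items():
    if not (meta["question"] and meta["action"]):
        print(f'Drop or justify "{name}": it fails the "so what?" test')
```

Anything the audit flags either earns a documented research rationale (point 3 above) or gets cut.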
