Visualizing Data

Just what the world needs — another blog.

Well, when it comes to sharing best practices for displaying healthcare data visually, and to finding and telling the story buried in your data, that is EXACTLY what the world needs: a blog that delivers the information and help you've just got to have, but don't have easy access to.

And as much as I love the sound of my own voice (and I do, ask anyone), I encourage you to contribute your thoughts, questions, and examples (HIPAA-compliant, please — I don't look good in stripes).

Let the blogging begin.

Scope of Responsibility Changes the View

I’ve been teaching a lot of data visualization workshops lately. Inevitably, when I reach the part of the day when I ask participants how they gather requirements to build a monitoring dashboard, I always get the same rote, data-analyst-centric response: “I ask my customers what questions they want answered.”

My job (or cross to bear; you decide) is to then firmly nudge them toward a new approach, one that requires them to ask instead, “What is your role and scope of responsibility? As you work in that role, what decisions must you make to achieve your goals and objectives?”

Dashboards exist to help people visually monitor – at a glance – the data and information they need to achieve one or more goals and objectives quickly and easily. This is considerably different from analyzing data to answer a specific question or to uncover potentially interesting relationships in that data.

With this construct about the purpose of a dashboard in mind, let’s consider examples of two different prototype Emergency Department (ED) dashboards designed using the same source data. We’ll ask end users to describe their role (position) in the ED, the scope of their responsibility there, and what summary information they need in deciding how to meet their goals and objectives. We’ll call this the RSD [Role, Scope, Decisions] approach.

Example A: Emergency Department Operations Manager

Here, the ED Operations Manager’s role and scope of responsibility are to ensure that patients arriving at the ED receive timely and appropriate care, and that the ED doesn’t become overloaded, thereby causing unduly long patient wait times or diversion to another facility.

Given these parameters, the chronological frame for the dashboard below is the present (real time), and it is focused on where, and for how long, actual ED patients are in the queue to receive care.


In the upper left-hand section of this dashboard is a summary ED Overload Score (70), overlaid on a scale of No to Extreme Overload. Under this summary are elements of the score: ED Triage (10 points), Seen by MD/Waiting for Specialty (10 points), Specialty Patients Waiting (20 points), and Waiting for In-Patient Bed (30 points). This summary provides a mechanism for the manager to monitor both the risk of overload and the key factors driving the score higher.
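A composite score like this can be sketched as a simple weighted sum of active queue conditions. The component names and point values below come from the dashboard; the band cut points on the No-to-Extreme scale are my own illustrative assumptions, not something stated in the post.

```python
# Sketch of a composite ED Overload Score: each queue condition that is
# currently active contributes its point value. The point values match the
# dashboard described above; the band thresholds are assumed for illustration.

COMPONENTS = {
    "ED Triage": 10,
    "Seen by MD / Waiting for Specialty": 10,
    "Specialty Patients Waiting": 20,
    "Waiting for In-Patient Bed": 30,
}

def overload_score(active):
    """Sum the point values of the components currently active."""
    return sum(points for name, points in COMPONENTS.items() if name in active)

def overload_band(score):
    """Map a score onto a No-to-Extreme Overload scale (assumed cut points)."""
    if score >= 60:
        return "Extreme Overload"
    if score >= 40:
        return "High Overload"
    if score >= 20:
        return "Moderate Overload"
    return "No Overload"

score = overload_score(COMPONENTS)  # all four conditions active -> 70
band = overload_band(score)
```

With all four conditions active the score is 70, which is what the example dashboard displays at the top of its overload scale.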

Additional information on the dashboard helps the Manager analyze the patient census (across census categories), and see how many cubicles are currently in use vs. available for examination and treatment. Average wait times – in minutes and by patient triage level, across eight categories such as Arrival to MD Evaluation (compared to a hospital goal) and ED Length of Stay (LOS) – are displayed using bar graphs in the middle section. The lower left-hand display projects when additional cubicles will be available (blue signals available cubicles; orange, a shortage); the lower right-hand one shows information on patient wait times by sub-specialty.

All of this dashboard’s metrics are designed to help the ED Operations Manager identify active and potential bottlenecks, and to act to meet the objectives: delivering timely and appropriate care, and avoiding ED overload.

Example B: Emergency Department Executive Director

Here, the Executive Director’s role and scope of responsibility are to ensure not only that the ED team is providing timely and appropriate care, but that reimbursement is not forfeited because pay-for-performance (annual, contractual, third-party, value-based-purchasing) goals are missed.

In response to this role’s needs, display time frames include both Month to Date and Current and Previous YTD performance, allowing the Director to stack current performance against agreed-upon targets for metrics tied to third-party reimbursement, as well as potential opportunities for improvement.


At the top of the dashboard is a table of summary performance metrics (number of patients seen and treated; time in diversion due to ED Overload) for the current month vs. current and previous years, and change over time. The middle bar charts provide the Director with the current month and YTD performance compared to targets for the metrics often tied to third-party, value-based (pay-for-performance) reimbursement. The deviation graphs at the bottom of the dashboard provide context for monthly performance compared to targets trended over time.

In this dashboard, summary metrics help the ED Executive Director monitor overall performance, identify areas for improvement in delivering timely care and avoiding ED Overload, and ensure that reimbursement is not lost.

Shifting from asking your customers what questions they need to answer to asking them to describe scope, role, and decisions may seem like a distinction without a difference. It isn’t. Framing inquiries this way stimulates everyone to step back and examine what is required to support a universal, shared goal: acquiring the right information – at a glance – to work toward goals and objectives, and hit those targets, quickly, confidently, and well.

Posted in Dashboards, Data Visualization, Newsletters

Stop Hunting Unicorns and Start Building Teams

Guests in our home are often very generous with their compliments on my cooking skills. While I sincerely wish those compliments were deserved, the sad (and, okay, shocking) truth is that they are not.

I’m not a great cook: rather, I am an excellent assembler of food that other people have created. I know where to shop, how to put together terrific dishes, and how to pour a generous glass of wine (or three). These skills appear to convince people that I know how to cook.

Here’s another thing I’m great at assembling: fun, smart, wildly talented, highly collaborative, and productive professional teams. What’s my secret? I know that unicorns aren’t real.

Unfortunately, many health and healthcare organizations, rather than working to assemble these types of teams, persist in hunting unicorns. They assume that one person can possess every skill required to create compelling and clear analysis and reporting.

These organizations need to stop the fairy-tale hunt, and start building data-analytics and communications teams. The idea that any one analyst or staff person will ever have every single bit of knowledge and skill in health and healthcare, technical applications, and data visualization and design required to deliver beautiful and compelling dashboards, reports, and infographics is just – well, sheer lunacy.

3 Tips for Building Data-Analytics and Reporting Teams

Tip 1: Search For Characteristics & Core Competencies

To build a great team, you need to understand what characteristics and core competencies are required to complete the work. Here’s where to begin:

  • Curiosity. When teams are curious, they question, probe, and inquire. Curiosity is a crucial impetus for uncovering interesting and important stories in our health and healthcare data. Above all else, you need a team of curious people! (Read my previous post about this here.)
  • Health & Healthcare Subject-Matter Expertise. Team members with front-line, boots-on-the-ground, clinical, operational, policy, financial, and research experience and expertise are essential for identifying the questions of interest and the decisions or needs of the stakeholders for and to whom data is being analyzed and communicated.
  • Data Analysis and Reporting. Without exception, at least one member of your team must have math, statistics, and data-analysis skills. Experience with data modeling is a plus if you can find it; at a minimum, some familiarity with the concept of modeling is very helpful. The ability to use data-analysis, reporting, and display tools and applications is also highly desirable, but another more technically trained IT team member may be able to bring this ability to the table if necessary.
  • Technical: IT & Database Expertise. Often, groups will confuse this skill area with data-analysis and reporting competence. Data and database architecture and administration require an entirely different set of skills from those needed for data analysis, so it’s important not to conflate the two. You’ll need team members who know how to extract, transform, and load (ETL) data, and to architect it for analysts to use. And while you may sometimes find candidates who have both skill-sets, don’t assume that the presence of one means a lock on the other.
  • Data Visualization & Visual Intelligence. Knowledge of best practices and awareness of current research is required to create clear, useful, and compelling dashboards, reports and infographics. But remember, these skills are not intuitive; they must be learned and honed over time. And although it is not necessary for every team member to become an expert in this field, each should have some awareness of it to avoid working at cross-purposes with team members employing those best practices. (That is, everyone should know better than to ask for 3D red, yellow, and green pie charts.)
  • Project Management. A project manager with deep analytic, dashboard, and report-creation experience is ideal – and, like the mythical unicorn, nearly impossible to find. But don’t let that discourage you. Often a team member can take on a management role in addition to other responsibilities, or someone can be hired who, even without deep analytics experience, can keep your projects on track and moving forward.

Tip 2: Be Prepared to Invest in Training and/or External Resources

  • Why? Because they don’t teach this stuff in school.

    At present, formal education at institutions of higher learning about the best practices of data visualization, and state-of-the-art visualization and reporting software applications is scarce, and competition to hire qualified data analysts is fierce. As a result, you must be prepared to invest in training the most appropriate team members in many of these new skills, and/or working with qualified external resources.

Tip 3: Have A Compass. Set a Course. Communicate It Often.

  • The primary challenge for your team is not to simply and boldly wade into the data and find something interesting. Rather, team efforts should be aligned with the organization’s goals. This means that you must establish and communicate clear direction and objectives for everyone to deliver on from Day One. Having a compass and setting a well-defined course also help keep your teams from getting caught up in working on secondary or tertiary problems that are interesting, but unlikely to have significant impact on the main goal.

I do wish that data-analysis and reporting unicorns were real! Life would be so much simpler. But they aren’t and never will be, so I let go of that fantasy long ago. You should, too.

Posted in Data Analysis, Newsletters

Mental Models

Whenever I teach my “data visualization best practices” courses, I always include an introductory overview about mental models – an explanation of a person’s thought process about how something works in the real world. I do this because understanding mental models can help us construct an effective approach to solving problems and accomplishing tasks.

First, I ask course participants to think about, then describe, how they read a printed book.

The responses always include such observations as, “I look at the Table of Contents; then I turn the pages from right to left. I read the words on the pages from left to right and top to bottom. If a passage holds particular interest, I often underline it; if I come across an unfamiliar word, I sometimes look it up in a dictionary.”

Once we have gone through this exercise, I ask how they read a book on a Kindle or other electronic device. Their responses are almost identical to the first set. Turning pages and text exploration are faster and more effortless on an e-reader (if less tactilely satisfying) – but they are essentially the same processes.

Next, I ask them to weigh in on how successful they believe Amazon would have been had its designers created an e-reader that required people to process a book in an entirely new way – for example, by starting on the last page, turning pages from left to right, and reading from bottom to top. How many of you, I ask, would have even considered reading on a Kindle? Not a single hand is ever raised.

This simple, familiar example makes the point: it’s really difficult, if not impossible, to get people to change the way they think about doing something – especially when that way is familiar, and works.

As a result, uncovering and understanding the mental models of the viewers of our dashboards and reports – the way they use data and information to support their work – is essential to designing and building something of value. Quite simply, before we ever sit down to our design work at a computer screen, we must endeavor to learn as fully as possible the process by which our internal and external customers use data to make decisions about the work they do.

Let’s consider a simple example: post-discharge referrals to home health care providers by a local hospital.

How might a discharge or case manager think about – what is the mental model for – determining which patients to refer for services and where to refer them? It is highly likely (and has been confirmed based on previous work analyzing one such group’s mental model) that these managers think about and want to know the answers to such questions as:

  • are all patients who could benefit from home health care services – say, patients who might be at increased risk for readmission within 30 days – receiving referrals to them?
  • which providers are geographically closest to a patient’s home?
  • how well do different agencies perform by quality-of-care measures?
  • how do patients rate different agencies on satisfaction surveys?

Using the questions gleaned from our example discharge or case manager’s mental model as a guide, we created the following three interactive dashboards to display, highlight, and clarify data in alignment with these questions.

The first dashboard filters for a particular hospital and desired date. The top section displays summary metrics that drill down by hospital service line. The map pinpoints the ZIP code locations of home health agencies with referrals, while a bar graph quantifies referrals per agency. Each Provider Name is a hyperlink to the Home Health Agency Comparison dashboard.


On the second dashboard, “At Risk by DRG,” is a summary narrative capturing statistics on missed opportunities – that is, concerning patients who may be at risk for readmission and for whom home health care may help reduce that risk; a visual trend line highlights these figures. Additionally, the display breaks the data into categories and drills down to a specific DRG level. To the right is a payer heat map that uses color to identify those at highest risk.


“Home Health Agency Comparison,” the third dashboard, shows – with an easy-to-use, side-by-side comparison tool – how HHAs perform on publicly reported quality metrics.


Far too often we blame ourselves when we fail to grasp how something new to us works, or can’t make any sense of the information we have been given in a dashboard or report. Most of the time, though, we are not to blame. Rather, the product designer or data analyst has failed to understand our mental model – the way we interact with or think about things in the real world. We end up looking for this:

And worse than banging our heads against the foolishness of paying for and being handed something we don’t want and won’t use is the inevitable result that we will simply revert to what we know: a book printed on paper, or an Excel spreadsheet – thereby missing the potential to do more and see better in a new and exciting way.

And wouldn’t that be a shame?

P.S. To view all three of these examples as interactive dashboards, click here.

Posted in Best Practices, Dashboards, Data Visualization, Newsletters

Red, Yellow, Green… Save It For The Christmas Tree

Listen up, folks… it is time for a red, yellow, green color intervention of the most serious kind. The use of red, yellow, and green to indicate performance on your reports and dashboards has reached crisis level and can no longer be ignored.

It is time for some serious professional help.

Here is your choice: go into color rehab treatment and clean up your act, or risk losing your stakeholders’ attention and – even more damaging – risk obscuring important information they require to make informed decisions.

And just to be clear – you are absolutely risking these things by overusing and incorrectly using red, yellow, green color coding in your reports and dashboards. (And besides, red, yellow, green is SO last season.)

I can read your thoughts – “but that is what people ask for – they want to emulate a stoplight – they LIKE red, yellow, green.” And I liked cheap beer until I tasted the good stuff.

Let’s consider how the use of these colors is hurting your reports, and what you can do to fix it.

1. Did you know that approximately 10% of all men and 1% of all women are colorblind? Yes, it is sad, but true. So, where most of us see this:

traffic light

Our colorblind colleagues see this:

traffic light - colorblind

Which means that when you publish a report that looks like this to the majority of us:

Medical Center Results 2010
Acute Myocardial Infarction (AMI)   Q1   Q2   Q3   Q4
Aspirin at Arrival 88% 83% 78% 83%
Aspirin Prescribed at Discharge 38% 86% 60% 86%
ACEI or ARB for LVSD 40% 70% 53% 83%
Adult Smoking Cessation Advice/Counseling 80% 80% 80% 80%
Beta-Blocker Prescribed at Discharge 89% 92% 89% 87%
Fibrinolytic Therapy Received Within 30 Minutes of Hosp Arrival 98% 98% 98% 97%
Primary PCI Received Within 90 Minutes of Hospital Arrival 86% 86% 86% 65%

About 10% of men and 1% of women will see only this:

Medical Center Results 2010
Acute Myocardial Infarction (AMI)   Q1   Q2   Q3   Q4
Aspirin at Arrival 88% 83% 78% 83%
Aspirin Prescribed at Discharge 38% 86% 60% 86%
ACEI or ARB for LVSD 40% 70% 53% 83%
Adult Smoking Cessation Advice/Counseling 80% 80% 80% 80%
Beta-Blocker Prescribed at Discharge 89% 92% 89% 87%
Fibrinolytic Therapy Received Within 30 Minutes of Hosp Arrival 98% 98% 98% 97%
Primary PCI Received Within 90 Minutes of Hospital Arrival 86% 86% 86% 65%

2. Additionally, without a column that indicates what the red, yellow, and green thresholds mean (goal or benchmark data), the viewer has no way of knowing what rate will make a measure change color. What rate will change the color in this report to green? Or yellow? Or (oh, horrors!) red?

And since when is red a “bad” color? It simply means stop on a traffic light – a very good thing for managing traffic. Red can symbolize fire, passion, heat and in many countries it is actually a symbol of good luck… but I digress.

Using all that red, yellow, and green also breaks the big data-display design rule, which is:

Increase the DATA INK and decrease the Non-Data INK

The data, data, data is what it is all about – not colors, gridlines and fanciful decoration.

So what can you do, without your stoplight colors, to draw viewers’ attention to important data? Plenty…

You can eliminate all of the non-data ink and add data-ink to the areas of importance by:

  • Italicizing and bolding
  • Using soft hues of color to highlight data
  • Applying simple enclosures to denote the data as belonging to a group that needs attention.

You can do all of these things as I have below or just one or two depending on how much data you have in your table.

Medical Center Results 2010
Acute Myocardial Infarction (AMI) Q1 Q2 Q3 Q4 Target
Aspirin at Arrival 88% 83% 78% 83% 80%
Aspirin Prescribed at Discharge 38% 86% 60% 86% 80%
ACEI or ARB for LVSD 40% 70% 53% 83% 80%
Adult Smoking Cessation Advice/Counseling 80% 80% 80% 80% 80%
Beta-Blocker Prescribed at Discharge 89% 92% 89% 87% 85%
Fibrinolytic Therapy Received Within 30 Minutes of Hosp Arrival 98% 98% 98% 97% 95%
Primary PCI Received Within 90 Minutes of Hospital Arrival 86% 86% 86% 65% 85%

This method of displaying the data is much easier on the eyes and brain – it is far less jarring and allows the viewer to focus on the information that is important.

You could also simply sort and categorize the data to show where improvement is required versus where things are going well. Consider the following example report for Q3 results:

Medical Center Results 2010
Acute Myocardial Infarction (AMI) Q1 Q2 Q3 Target
Measures Requiring Improvement:
Aspirin at Arrival 88% 83% 78% 80%
Aspirin Prescribed at Discharge 38% 86% 60% 80%
ACEI or ARB for LVSD 40% 70% 53% 80%
Measures that Meet or Exceed Target:
Adult Smoking Cessation Advice/Counseling 80% 80% 80% 80%
Beta-Blocker Prescribed at Discharge 89% 92% 89% 85%
Fibrinolytic Therapy Received Within 30 Minutes of Hosp Arrival 98% 98% 98% 95%
Primary PCI Received Within 90 Minutes of Hospital Arrival 86% 86% 86% 85%

By arranging the report in this way, I have eliminated the viewer’s need to hunt, peck, and synthesize the measures that require improvement. They are at the top of the report, clearly and simply displayed.
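The sort-and-categorize step is easy to automate. This sketch splits the Q3 measures from the example table into the two groups shown above by comparing each rate to its target; the figures are the ones in the table, with the longer measure names shortened for readability.

```python
# Sketch of the sort-and-categorize approach: split measures into those
# below target for the most recent quarter and those at or above it.
# Values mirror the Q3 example table above (names lightly abbreviated).

measures = [
    # (measure, q3_rate, target) -- percentages
    ("Aspirin at Arrival",                        78, 80),
    ("Aspirin Prescribed at Discharge",           60, 80),
    ("ACEI or ARB for LVSD",                      53, 80),
    ("Adult Smoking Cessation Advice/Counseling", 80, 80),
    ("Beta-Blocker Prescribed at Discharge",      89, 85),
    ("Fibrinolytic Therapy Within 30 Minutes",    98, 95),
    ("Primary PCI Within 90 Minutes",             86, 85),
]

needs_improvement = [m for m in measures if m[1] < m[2]]
meets_target      = [m for m in measures if m[1] >= m[2]]

# "Measures Requiring Improvement" go at the top of the report,
# "Measures that Meet or Exceed Target" below -- no color coding needed.
```

Running this on the example data puts the same three aspirin/ACEI measures in the improvement group and the remaining four in the meets-target group, exactly as in the redesigned report.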

Now go back and take a look at the red, yellow, green table – check your pulse, and note whether your jaw is clenched. Then look at the newly designed data tables – I bet you feel calmer already.

And if you were wondering how colorblind people manage to drive: it’s because of the order of the lights. They know that red is first, then yellow, then green. If the lights are arranged horizontally, though, all bets may be off, and you should proceed with caution… lots and lots of caution…

Posted in Communicating Data to the Public, Data Visualization, Design Basics, Know Your Audience, Newsletters, Using Color

Twitter Me This

Time for a confession: I’ve been a Twitter skeptic from day one.

Even though I understand how it works (140-character electronic updates – “Tweets” – that people post for their followers – friends, family, political junkies – and that fill the gaps between other types of communications, such as e-mail and blog postings), I’ve still wondered, “Why would I want to do that?”

It’s only after experiencing Twitter over time that I’ve come to understand its value. And these real-world experiences have made me care about Twitter in a way that neutral facts or statistics never could. 140 characters cleverly arranged are much more than friendly updates. In some cases, they have enormous influence – good, bad, and occasionally ugly (you know it’s true). Tweets can be powerful.

In reflecting on my skepticism about Twitter, I also realized that I had been a bit of a hypocrite (a Twittercrite?): almost daily, I use display devices such as Sparklines (to name only one) that condense lots of data into one concise display – a sort of “Twitter for data visualizations.”

And as happens with Twitter, once I began using them regularly, it became clear that, deployed in a clever and correct way, this “condensing and concentrating” type of display tool could empower me to deliver far more information on my dashboards and reports than could other methods.

Edward Tufte coined the term “Sparkline” in his book Beautiful Evidence: “These little data lines, because of their active quality over time, are named sparklines – small, high-resolution graphics usually embedded in a full context of words, numbers, images. Sparklines are datawords: data-intense, design-simple, word-sized graphics” (47).

Typically displayed without axes or coordinates, Sparklines present trends and variations associated with some measurement of frequent “sparks” of data in a simple, compact way. They can be small enough to insert into a line of text, or several Sparklines may be grouped as elements of a Small-Multiple chart. Here are a few examples.
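To give a feel for just how compact a sparkline can be, here is a rough sketch that renders a series as a word-sized strip of Unicode block characters, with no plotting library at all. The heart-rate values are invented for illustration; a real embedded sparkline would of course be a proper graphic.

```python
# Render a numeric series as a word-sized "text sparkline" using Unicode
# block characters -- a rough stand-in for the embedded graphics Tufte
# describes. The sample heart-rate values below are invented.

BLOCKS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Scale each value into one of eight block heights and join them."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero on a flat series
    return "".join(
        BLOCKS[round((v - lo) / span * (len(BLOCKS) - 1))] for v in values
    )

heart_rate = [72, 75, 74, 88, 104, 117, 96, 81, 76, 73]
line = sparkline(heart_rate)  # one character per reading, fits in a sentence
```

The result is a strip as long as the series itself, small enough to sit inline next to the median/min/max table described in Example 1 below.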

Example 1: Patient Vital Signs

Here, 24-hour Patient Vital Signs (blood pressure, heart-rate, etc.) are displayed in the blue Sparkline, along with the normal range of values, displayed in the shaded bar behind them. To the right of the Sparkline is a simple table that shows the median, minimum, and maximum values recorded in the same 24-hour time-frame.


This basic display delivers a lot of valuable information to care-givers monitoring patients, making it clear that during the same period around the middle of each day, all of the patients’ vital signs fall outside normal ranges.

Example 2: Deviation from Clinic Budget

In this second example, we used a deviation Sparkline to show whether use of available surgical-center hours at three different locations is above or below budget. We added two colors to the Sparkline to make clear the difference between the two values (blue for “above”; orange for “below”) within a rolling 12-month time-frame.


Example 3: Deviation From Hospital Budget

Here we created a deviation Sparkline to show the departure from hospital budget numbers across several metrics (“Average Daily Census” and “Outpatient Visits,” for two examples), but instead of using brighter colors to indicate where performance falls, as we did in Example 2, we have chosen a pale gray shade to indicate when actual daily performance drops below projected targets.


Please note as well that in each of these three examples, we have embedded the Sparklines into the display and provided context through the use of words, numbers, and icons. We do this because most of the time Sparklines cannot stand on their own; rather, they require some additional framework to convey information and signal value to the viewer.

Finally: although I have been a Twitter skeptic, HDV does have a Twitter account at @vizhealth and Tweets occasionally about things that interest us, or what the company is up to. Take a look!

Posted in Communicating Data to the Public, Data Visualization, Newsletters

“Time is [Not] on My Side”: Using Time Efficiently When Developing Dashboards

In 1657, the French mathematician and philosopher Blaise Pascal apologized for a very lengthy letter he had written thus: “This letter is so long only because I had no time to make it shorter.” Over 350 years later, his words still resonate with me. I imagine that they do with you, too.

It’s hard work to communicate one’s thoughts and ideas briefly yet completely in writing. We need time – time to think, to try out different words and phrases, to solicit feedback, to edit (and edit some more). The same is true of the process of building overview monitoring dashboards to display healthcare data in a clear and compelling way: we need time to grasp the underlying data and compose meaningful summaries; time to discover the best medium for, and arrangement of, the data; time to solicit feedback, and to edit and refine the whole.

Sadly, we each only get 24 hours a day; I can’t help you much there. But I can share with you the approach I use, one I believe will help you use your time more efficiently as you develop really great overview dashboards.

The most important thing to do is step away from your computer and acquaint (or re-acquaint) yourself with both your colleagues and your whiteboard and markers. This idea doubtless sounds antiquated, but acting on it will make a huge difference: it frees you from worrying about or being distracted by the values in the data, and permits you to think contextually about the dashboard you need to create. Liberating your eyes from the computer screen lets you exchange ideas with your colleagues; using a whiteboard means that if something isn’t working, you can erase it and start over – no harm, no foul.

Think of all that this freeing step allows you to consider:

  • The scope and role of the people using the overview dashboard, and what decisions they need to make. Are they responsible for many facilities and departments? If yes, then they need an overview summary dashboard that lets them monitor several locations on a single page (read more about this here).
  • The data categories you have to work with. I can’t emphasize enough that you must NEVER skip this step: it is essential to being able to summarize and organize the data on an overview dashboard in a correct and meaningful way based on your viewer’s scope, role, and decisions.
  • The context: “Compared with what?” Everything in data analysis and display is anchored in comparison. Do you have budget data, targets, previous results and/or group comparisons? If you can’t answer the question “compared with what?” your viewers will invariably end up saying “so what?”

Here’s a brief example of what I mean:

Imagine we’ve been asked to create an overview dashboard for the senior Quality Director for a multi-system organization. Asking the questions suggested above, we discover that

  • her scope and role encompass numerous facilities and multiple performance measures. The high-level decisions she must make include identifying groups that may need help improving their performance on quality measures, and determining if there are measures that all groups need help on.
  • the data categories she works with include the institutions delivering care and the quality metrics required by regulatory groups.
  • the context of her decisions includes monitoring groups and measuring performance in comparison with each other. She has set performance targets for each group and each measure.

From a first review of the data, you learn that there is historical and current information available that may be categorized by facilities and measures.

Armed with the crucial gleanings summarized above from careful research and review, you can create an overview dashboard that will allow the director to consider the data in two useful, revelatory ways:

  • Facilities: Anchoring the first view of the data to each facility allows the director to rate each facility’s performance, identifying those doing well, and those that need to improve. This view helps the director consider “whom” she may need to focus on.
  • Measures: The bottom half of this example is organized by each measure, and allows the director to discern specific measure(s) that some or all facilities need to improve upon. This view enables a high-level identification of “what” should be the focus of improvement efforts.
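The two views are really the same performance records rolled up two different ways: by facility ("whom to focus on") and by measure ("what to focus on"). This sketch shows that dual grouping; the facility names, measures, scores, and targets are all invented for illustration.

```python
# Sketch of the two overview groupings: identical records rolled up by
# facility and by measure. All names and numbers below are invented.
from collections import defaultdict

records = [
    # (facility, measure, score, target)
    ("North Hospital", "Aspirin at Arrival", 92, 90),
    ("North Hospital", "Discharge Teaching", 84, 90),
    ("South Clinic",   "Aspirin at Arrival", 88, 90),
    ("South Clinic",   "Discharge Teaching", 95, 90),
]

def misses_by(key_index):
    """Count below-target records, grouped by facility (0) or measure (1)."""
    misses = defaultdict(int)
    for rec in records:
        if rec[2] < rec[3]:  # score below target
            misses[rec[key_index]] += 1
    return dict(misses)

by_facility = misses_by(0)  # the "whom" view, for the top half
by_measure  = misses_by(1)  # the "what" view, for the bottom half
```

On this toy data, each facility misses one target and each measure is missed once; on real data the two views would point the director at the outlier facilities and the system-wide problem measures, respectively.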

summary-comparison

In this very simple example, I have created a summary view that lays out for the director, at a glance, the best- and worst-performing facilities, and makes it easy to quickly spot which facilities need her focused and immediate attention. My second display highlights specific measures needing improvement at multiple facilities.

Of course, supporting zoom reports are required to understand the underlying details, but at a very high level, this approach helps point the director in the right direction as she monitors results over time.

I wish the refrain from “Time Is on My Side” (as recorded by the Rolling Stones) were true in this case; alas, “It’s Only Rock ’n’ Roll.” (But I still like it.)

Posted in Best Practices, Dashboards, Newsletters

The Hammock Chronicles (2016 Edition)

Long-time subscribers to this newsletter know that every August, my family and I retreat to Bustins Island in Casco Bay, Maine, where I commandeer the hammock and contemplate life (along with the inside of my eyelids). (Check out previous posts from my island hammock here.)

Very little has changed about Bustins Island since I first started coming to it 31 years ago with my husband, Bret. One thing that has changed, however, is that after all those summers I know the Island drill, as does my family, and we have a highly streamlined system for packing, travel, and enjoying our stay. We know exactly what to take for a fab vacation: good food and wine, bathing suits, some great books. And that’s it.

Musing on the accomplishment of having exactly what we need on the Island got me thinking (from the hammock, of course) about dashboard and report design, and the experience and discipline it takes to display exactly the right information needed for informed decisions – nothing more nor less.

Finding this balance can take a little time, and is often the result of trial and error; but once you commit to a method and approach, you will find your work much easier. The following “Three Commandments” will remind you to try techniques I have found extraordinarily helpful.

Document and Diagram. Take time to understand your clients’ mental models of the way they use data in their work. A diagram like the one below will guide you and them through the documentation of precisely what information should display on an Overview Summary Dashboard (for monitoring performance); Zoom Reports (for more in-depth analysis on a specific dimension); and Details Lists.

[Image: overview-summary-dashboard]

Talk it Through. If you get stuck (and you will!), ask your clients to describe specific situations in which they used data that you’d displayed for them; or to recall scenarios when they would have found additional information helpful. Often this clarifies for both of you how and at what stage in the thought process information will be most supportive, and the precise spot it will be displayed most effectively. Sometimes after this process, a metric may even be judged of no value, and can be deleted to free up precious space.

Build Simple Prototypes; Be Engaged. Okay, this sounds obvious – but you must engage your clients, honestly and clearly discussing what will be created and using basic prototypes to guide you through several rounds of brainstorming before everything starts to crystalize. Remember though that there is a fine line between “tossing around ideas” and “over-thinking”: you also need to know when to stop prototyping, and build and deploy. The perfect is the enemy of the possible.

It took our family years to figure out exactly what we needed on Bustins Island for a great vacation – and of course that “exactly” has changed over time as our children have grown and our interests evolved.

But once we thought deeply about what was important to us, and about which material objects would enhance that enjoyment, we were able to enjoy our vacation to the fullest – supported by just the right type and amount of stuff.

The same is absolutely true of reports and dashboards. Once you determine what your clients need and hope to accomplish, you can collaborate with them to choose just the right supporting data and information, and the most effective way to display them – nothing less, nothing more.

Posted in Best Practices, Communicating Data to the Public, Dashboards, Newsletters

2016 Summer Reading List for the Healthcare Data Geek

I don’t know about you, but I was shocked when I glanced at the calendar and realized that it was time for my annual summer reading list for healthcare data geeks! HULLO! Wasn’t it October just last week?

Anyway, without any further delay, here are my recommendations for you to consider, take to the beach, or peruse while hanging in a hammock (my favorite summer pastime, as many of you know).

The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t

[Image: the-signal-and-the-noise]

It only seems fitting that I’m writing about this book while the movie Moneyball plays on the television in the background. That’s because Nate Silver, the author of The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t, first gained public notice when he devised a new and innovative method for predicting the performance of baseball players – one he later sold, by the way, for a tidy sum to the website Baseball Prospectus.

No slug, Silver then went on to win big at the poker tables and to reinvent the field of political predictions (check out his blog FiveThirtyEight, named as a nod to the number of electors in the Electoral College).

Silver doesn’t spend much time explaining to the reader how he develops his prediction algorithms. Rather, he evaluates predictions in a variety of fields, including epidemiology, weather and finance. For example, he explains the reason why we are not even close to predicting the next killer bird flu epidemic or the next ruinous earthquake.

If you are hoping to find a book that has riveting tales about success in the face of long odds, this isn’t it. But if you are interested in gaining great insights into the pitfalls of the increasingly over-used term, “predictive analytics,” this book is for you.

When Breath Becomes Air

[Image: when-breath-becomes-air]

Although this book may seem out of place on a reading list for healthcare data geeks, I would argue it is absolutely appropriate and one that you should give serious consideration to reading.

When Dr. Paul Kalanithi sent his best friend an email in May 2013 disclosing that he had terminal cancer, he wrote: “The good news is that I’ve already outlived two Brontës, Keats and Stephen Crane. The bad news is that I haven’t written anything.” But that changed in a powerful and beautiful way – with only 22 months left to live, Dr. Kalanithi, who died at age 37, overcame pain, fear and exhaustion to write this profoundly insightful and moving book, When Breath Becomes Air.

On an emotional level, this book is unbearably tragic. But it is also an oft-needed reminder that all of healthcare is and should be about the people we serve with our work, whether directly like Dr. Kalanithi or indirectly in supporting roles. As I read about Kalanithi’s struggle to understand who he became once he could no longer perform neurosurgery, and what he wanted from his remaining time, my heart broke. And when I read about a terrible period when his oncologist was away and he was treated by an inept medical resident who nearly hastened his death by denying him one of the drugs he desperately needed, my anger left me fuming at the unconscionable gaps in our healthcare systems.

Here’s the bottom line – I am including this book on my list this year because it is beautifully written and because it is a sober reminder about the importance of our collective efforts to deliver high quality, compassionate care to all who seek medical attention. We must continue the fight, and Dr. Kalanithi’s story serves to remind us exactly why.

Nudge: Improving Decisions About Health, Wealth, and Happiness

[Image: nudge]
Nudge – to give (someone) a gentle reminder or encouragement. – Webster’s Dictionary

As I was packing away some of the books on my office shelves (to make room for new books), I couldn’t help but stop and flip through Nudge one more time.

Few people will be surprised to learn that the setting in which individuals make decisions often influences the choices they make.

How much we eat depends on what’s served on our plate. The items we pick from the cafeteria line correspond with whether the salads or the desserts are placed at eye level (although I can argue that no matter where dessert is, I will find it). The magazines we buy depend on which ones are on display at the supermarket checkout line.

But the same tendency also affects decisions with more significant consequences: How much families save and how they invest; what kind of mortgage they take out; which medical insurance they choose; which cars they drive.

Behavioral economics, a new area of research combining economics and psychology, has repeatedly documented how our apparently free choices are affected by the way options are presented to us.

This practice of structuring choices is called “choice architecture” and Richard Thaler and Cass Sunstein’s book is an insightful journey through the emerging evidence about how decisions are made and how individuals and policy makers can make better ones.

Thaler and Sunstein apply the principles of choice architecture to a few problems in healthcare:

  • How could Medicare part D be improved?
  • How can organ donation rates be increased?
  • Why shouldn’t patients be allowed to waive their right to sue for medical negligence in return for cheaper health care?

But the concepts in the book go well beyond their specific examples and could prove very useful to practicing clinicians who, the authors note, are often in the position of being choice architects for their patients.

Although there is still a lot of work to be accomplished (a lot), some of the principles of choice architecture are beginning to find their way into projects to promote better care, ensure better health outcomes and lower costs. These include, but are not limited to:

  • Alignment of incentives with desired and measurable outcomes (e.g., improved provider reimbursement for the active and measurable care coordination of diabetic patients).
  • Default care options that support better health practices (e.g., childhood immunizations).
  • Communication about care and treatment choices and their associated outcomes in patient-friendly formats (e.g., structured, well-supported informed-decision-making programs).
  • Systems that expect and therefore are designed to prevent, detect and minimize errors and improve patient compliance (e.g., pill cases and inhalers with dosage counters, alerts and reminders).

Nudge still holds up on a second reading – the examples are interesting and fun. More important, the information in this book absolutely has the potential to change the way you think about healthcare systems and the delivery of patient care.

And there you have it – my 2016 summer reading list for geeks.

But before I sign off, I want to thank all of my dedicated newsletter subscribers; the purchasers of The Best Boring Book Ever series; and the greatest clients one could have for the chance to work together as we endeavor to “show and see” new opportunities to improve healthcare. We look forward to visualizing even more healthcare with you.

Happy reading!

Posted in Books, Newsletters

Raising the Bar on Stacked Bar Charts

Unfortunately, and more often than is good for my mental health, I encounter data being ineffectively displayed in stacked bar charts. Which phenomenon leads to my “Question of the Day”: When should we present our healthcare data in a stacked bar chart versus some other display form? (A quick thanks, before I forget, to data-viz expert Stephen Few for his recent insightful post on this subject.)

As with all charts, we need to think first about the different types and characteristics of the data we are working with. (Are we looking at a time series? Interval data? Nominal categories?) What do we need to tell our viewers? Do we need them to

  • Understand the distribution of the data and whether or not it is skewed?
  • See how the data is trending over time?
  • Compare the parts of a whole or the sum of two or more of the same data parts for different groups?

Once we have considered both our data type and our message, we can confidently select the right chart design for the job.
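The pairing of message to chart type that this post walks through can even be sketched as a tiny lookup. This is a simplification, of course – real charts require judgment – and the message names below are my own shorthand, not official terms:

```python
# A minimal sketch of the chart-selection rules of thumb in this post.
# The message names and recommendations are simplifications for illustration.

RECOMMENDED_CHART = {
    "distribution": "histogram",            # see the shape and skew of the data
    "trend over time": "line graph",        # follow each category across time
    "two parts of a whole": "stacked bar",  # the one case a stacked bar shines
    "compare sums of parts": "stacked bar, compared parts anchored at the baseline",
}

def recommend_chart(message: str) -> str:
    """Return the chart type this post recommends for a given message."""
    try:
        return RECOMMENDED_CHART[message]
    except KeyError:
        raise ValueError(f"No rule of thumb here for: {message!r}")

print(recommend_chart("distribution"))     # histogram
print(recommend_chart("trend over time"))  # line graph
```

The point of writing it down this way is the discipline it imposes: you must name your data type and your message before you are allowed to pick a chart.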

Distribution

In the following example, we need to clearly show the age distribution of a group of patients. If we use a horizontal stacked bar chart, it will be close to impossible to quickly and easily compare age groups and determine if they are distributed normally, or if they skew towards younger or older ages. Compounding this problem is the use of very similar colors for very different percentages: for example, the 10-19 age group (21%) is displayed in grey, as are the 60-69 group (4%) and the 80+ group (1%).

[Image: raising-the-bar-01]

The appropriate way to display the age distribution of the population of interest is with a histogram like the one below.

[Image: raising-the-bar-02]

Displaying the data like this makes it easy for the viewer to directly compare the values in the different age categories by looking at the height of the bars, and to understand if the patients are skewing younger (as in the display above) or older.

Stated another way, a histogram is perfectly designed to enable us to compare the size of the bars and see the shape and direction of the data.
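If you build your displays in code, a histogram-style view like the one above takes only a few lines. Here is a minimal matplotlib sketch; the age-band percentages are invented placeholders (skewing younger), not the data behind my example:

```python
# A minimal matplotlib sketch of a histogram-style age distribution.
# The percentages below are illustrative placeholders, not real patient data.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

age_bands = ["0-9", "10-19", "20-29", "30-39", "40-49",
             "50-59", "60-69", "70-79", "80+"]
pct = [18, 21, 17, 14, 11, 8, 4, 6, 1]  # invented values, skewing younger

fig, ax = plt.subplots()
ax.bar(age_bands, pct, width=1.0, edgecolor="white")  # no gaps = histogram look
ax.set_xlabel("Patient age (years)")
ax.set_ylabel("Percent of patients")
ax.set_title("Age distribution of patient population")
fig.savefig("age_distribution.png")
```

One color, bars touching, ordered age bands: the viewer reads the shape of the distribution directly from the heights, with no legend to decode.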

Trends Over Time

Another mistake I see on a regular basis is the use of a stacked bar chart to display trends over time for different parts of a whole or a category of data, as in the chart below.

[Image: raising-the-bar-03]

Unfortunately, this approach permits accurate viewing and interpretation only at the very bottom or first part of the stacked bar (starting at 0). A viewer cannot in fact accurately or easily see how categories change over time, because each part of the bar begins and ends at a different place on the scale.

In order to correctly interpret what she sees in such a design, the user must do a mental calculation (a sort of math gymnastics) involving the beginning and end points of each section of the bar, for each time-frame.

She must then hold those pieces of data in memory, while simultaneously trying to understand how the data has changed through time, and attempting to compare it to the same information for all the other sections of the bar. (Merely describing this onerous process makes me tired.)

The best way to show trends over time is with a line graph like the one below.

[Image: raising-the-bar-04]

Such a graph allows the viewer to see whether something is increasing or decreasing, improving or getting worse; and how it compares to other parts of the whole. I have been challenged a few times by folks who believe that the stacked bar chart is better suited to showing that the displayed data is part of a whole; however, one can highlight that aspect easily by labeling the chart and lines clearly, as in the example above.
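For completeness, here is a minimal matplotlib sketch of that recommendation – one clearly labeled line per part of the whole. The categories and values are invented for illustration:

```python
# A minimal matplotlib sketch of trends over time: one labeled line per
# part of the whole, instead of a stacked bar. All values are invented.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
visits = {  # invented percent-of-total ED visits by acuity level
    "Low acuity": [42, 40, 37, 33],
    "Mid acuity": [38, 39, 41, 44],
    "High acuity": [20, 21, 22, 23],
}

fig, ax = plt.subplots()
for label, values in visits.items():
    ax.plot(quarters, values, marker="o", label=label)
ax.set_ylabel("Percent of total ED visits")
ax.set_title("ED visits by acuity, percent of total")
ax.legend()
fig.savefig("trend_by_category.png")
```

Because every line starts from the same axis, each category’s rise or fall can be read directly – no mental subtraction of stacked-segment endpoints required.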

Comparing Parts of a Whole and Sums of Parts

At this point, you may be asking, “OK, when is it appropriate to use a stacked bar chart?”

Well, let me tell you. Whenever you need to show two – and two only – parts of a whole, a stacked bar chart does the trick quite nicely, and can also be a space-saver if you have limited real estate on a dashboard or report. The display works well precisely because the viewer doesn’t have to do the math gymnastics described above: the two parts can easily be seen and compared.

Depending on the layout of a report, you can play around with vertical or horizontal bars as in the two different displays below to determine what will work best for your specific report or dashboard. I often prefer to use horizontal bars, because they allow me to place my labels once and add additional information in alignment with the bars (such as figures or line graphs) to show trends over time.

[Image: raising-the-bar-05]

[Image: raising-the-bar-06]

Trying to compare the sum of parts using two stacked bars, however, generates yet another problem. As I have said, it is very difficult to understand how big each part of the bars is, never mind comparing one bar to the other in some meaningful way. And the piling on of different colors, as in the graph below, is just distracting: it requires looking back and forth between the two bars as we try to hold colors and numbers in our short-term memory – a task none of us is very good at. More likely than not we give up on this cumbersome task, and the message is lost.

[Image: raising-the-bar-07]

There is, all the same, a way to compare the SUM of the same parts using a stacked bar chart. In the following example (which I found on Stephen Few’s site), I can compare different clinics’ payor mix (for the same payors) with a stacked bar chart like this:

[Image: raising-the-bar-08]

It is important to note in this example that the parts are arranged in the same order on both bar charts, and the two payor groups to be compared sit at the very bottom of the charts. This design permits an effortless grasp of beginning and end points. And here the use of color separates those parts from the others, drawing the viewer’s attention to the comparison to be understood.

As with all data visualization, the goal is to create charts and graphs that help people see the story in mountains of data without doing math gymnastics, color-matching, or anything else that strains not-always-reliable (and always over-taxed) short-term memory and pre-attentive processing.

Bottom line? Stop and think about the type of data you need to communicate, what you want your viewers to consider, and the best data visualization to accomplish these tasks.

Posted in Communicating Data to the Public, Data Visualization, Design Basics, Graphs, Newsletters

My Secret Tip for Testing Data Visualizations

This past Sunday my husband, Bret, our pup, Juno, and I headed out to Deer Island in Massachusetts Bay. We love this walk because of the fantastic views it affords of the Bay and of Boston, and because the island’s history is always a fun and fascinating topic of conversation.

For example, on this excursion, Bret and I talked about Trapped Under the Sea, Neil Swidey’s riveting book about the nearly-10-mile-long Deer Island Tunnel, built hundreds of feet below the ocean floor in Massachusetts Bay. It helped transform the Harbor from the dirtiest in the country to the cleanest – and its construction led to the tragic (and completely avoidable) deaths of five men.

As we rounded the southwest corner of the island, Boston revealed itself to us, and we stopped to see how many landmarks we could identify, along with an interesting fact about each to liven things up (yes, we do try to one-up each other).

We’ve done this numerous times over the years, but on this occasion, the exercise started me thinking about what I was seeing in an entirely new and different way. I began on my left looking at Fort Independence, then moved my eyes to the right to see the Prudential and John Hancock buildings, then the Bunker Hill Monument, followed by the Zakim Bunker Hill Memorial Bridge and the Logan Airport Control Tower.

That’s when it hit me: I was creating sentences and weaving them into a narrative about my beloved city using visual landmarks as cues, just as I do with my healthcare data visualizations.

[Image: boston-landmarks]

I’ve developed a habit when I’m designing or testing reports and dashboards: I imagine that I’m in front of the individual or group they’re intended for. Speaking aloud (yes, I do talk to myself on a regular basis), I practice to test whether, using the figures and graphs as my guide, I can create a cohesive, fluent, and compelling narrative.

The reason I do this (and that I encourage you to do it, too) is that I’ve learned that if I can tell a guided story about the data and information on the reports and dashboards I’m designing, then the people in my intended audience will be able to as well. Conversely, if I find myself struggling and stumbling, then I know I need to go back to the drawing board and either refine what I’ve created or, yes, ditch it and start over.

Consider the following prototype CEO Monitoring Dashboard that my team and I at HealthDataViz (HDV) created using fabricated data. I’ve added a few examples of the sentences and narrative I wrote as we were developing and testing it.

[Image: prototype CEO Monitoring Dashboard]

I always begin my descriptions with an introduction or executive summary about the level of data being displayed (Summary Overview vs. Subject-Area Specific, for example); the intended audience; and overall objectives and end use.

Next, I carefully survey the data being displayed, moving primarily from left to right and top to bottom – or, depending on the layout of the dashboard and leveraging the way that our eyes cover a page, beginning at the top left, moving to the right and then down the right-hand column and back up along the left-hand one.

Perhaps most important, I include very specific examples supported by data points. Selecting just the right ones for my review may be the hardest and most time-consuming part of this self-check I do, but it is absolutely essential for testing that what I have displayed is correct and makes sense – and that I can explain it in simple, brief terms.

Here is an abbreviated example of what I mean (pretend you’re in the room listening while I practice):

Summary Overview

  • This Hospital CEO Dashboard takes into account the current environment hospital CEOs must navigate – one shaped by Value Based Purchasing (VBP) and public reporting, and in which financial, clinical, information technology, and patient satisfaction results are all inextricably linked.

Top Left – One-Month Results and Summary Performance

  • On the upper left side of the dashboard, we can see that the Actual Average Daily Census for December was 4% below Budget (254 versus 264); and that as shown in the trend graph, this performance is reflective of the past twelve months’ performance, culminating in a YTD below-budget result of 8%.

Top Right – Payor Mix

  • It is also interesting to note changes to the hospital’s year to date (YTD) payor mix displayed in the bar graph at the top right of the dashboard. For example, in the current year, Commercial Insurance represents approximately 50% of all hospital payors as compared to 40% in the previous year.

Middle Right – Quality and Patient Satisfaction

  • On the HCAHPS survey question “Would recommend this hospital,” approximately 80% of the patients responding for this specific hospital said “yes,” as displayed by the horizontal blue bar. This result misses the hospital’s target of 90% (represented by the vertical black line), and places the hospital in the 75th percentile nationwide, as signified by the underlying horizontal stacked bar in shades of grey (no, not the movie, people – the bar chart!).

Bottom Right – EHR Compliance

  • In this display, we can see that Medicine and Pharmacy are performing better than their target levels at 100% compliance, and that Pathology and Urology have the worst compliance rates, at only 60% each.

Bottom Left – Hospital Specific Key Metrics

  • Two specific metrics that the CEO wants to monitor are the hospital’s 30-day readmission rates, and Supply Expenses as a percentage of Net Operating Expenses compared to target.

Middle Left – Mortality O/E Ratio

  • This display reveals that for the last three months displayed, the O/E ratios are unusually high and statistically significant (more deaths recorded than we would have expected, and the confidence interval does not include 1). In October, the ratio was approximately 1.5; in November, 1.8; and by December, it had climbed to 2.0. We have also coded these statistically significant O/E ratios in red to draw attention to them.
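For readers who want the arithmetic behind a display like that, here is a hedged sketch. It computes an observed-to-expected (O/E) mortality ratio and a 95% confidence interval using Byar’s approximation for a Poisson count – one common method; your risk-adjustment vendor may well use another. The counts are invented for illustration, not taken from the dashboard:

```python
# A sketch of an O/E mortality ratio with a 95% CI (Byar's approximation).
# One common method among several; the counts below are invented.
import math

def oe_ratio_ci(observed: int, expected: float, z: float = 1.96):
    """O/E ratio with an approximate 95% CI, treating observed as Poisson."""
    ratio = observed / expected
    lower = observed * (1 - 1 / (9 * observed)
                        - z / (3 * math.sqrt(observed))) ** 3 / expected
    o1 = observed + 1
    upper = o1 * (1 - 1 / (9 * o1)
                  + z / (3 * math.sqrt(o1))) ** 3 / expected
    return ratio, lower, upper

# Invented example: 30 observed deaths against 20 expected -> O/E = 1.5
ratio, lo, hi = oe_ratio_ci(30, 20.0)
print(f"O/E = {ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")

# The dashboard's red coding corresponds to this flag: the whole interval
# sits above 1, so the excess mortality is statistically significant.
flag = lo > 1.0
```

This is exactly the check the dashboard encodes in color: when the lower confidence bound clears 1, the ratio is flagged for attention.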

I cannot encourage you enough to start using this review-and-read-aloud technique to challenge yourself and clarify whether you have created a dashboard that makes sense and provides insights for your audiences that will lead them to take prompt, effective action. It is a simple, fast, and inexpensive way to get the answers you need for yourself and your own confidence and serenity.

The process may not always be easy: when you have to really, truly describe what you have created in a clear and compelling manner, using detailed explanations with examples from the data, I’ll bet you’ll find it challenging – perhaps even rather frustrating – the first few times you try. But keep at it: in the long run, you will discover that it helps you to create much better and more comprehensive reports and dashboards.

And if you ever need a break to clear your head, I have the perfect walk in mind to do so.

Posted in Best Practices, Communicating Data to the Public, Dashboards