Visualizing Data

Just what the world needs — another blog.

Well, when it comes to sharing best practices for displaying healthcare data visually, and to finding and telling the story buried in your data, that is EXACTLY what the world needs — a blog that delivers the information and help you've just got to have but don't have easy access to.

And as much as I love the sound of my own voice (and I do — ask anyone), I encourage you to contribute your thoughts, questions, and examples (HIPAA-compliant, please — I don't look good in stripes).

Let the blogging begin.

“Time is [Not] on My Side”: Using Time Efficiently When Developing Dashboards

In 1657, the French mathematician and philosopher Blaise Pascal apologized for a very lengthy letter he had written thus: “This letter is so long only because I had no time to make it shorter.” Over 350 years later, his words still resonate with me. I imagine that they do with you, too.

It’s hard work to communicate one’s thoughts and ideas briefly yet completely in writing. We need time – time to think, to try out different words and phrases, to solicit feedback, to edit (and edit some more). The same is true of the process of building overview monitoring dashboards to display healthcare data in a clear and compelling way: we need time – time to grasp the underlying data and compose meaningful summaries; time to discover the best medium for and arrangement of the data; time to solicit feedback; time to edit and refine the whole.

Sadly, we each only get 24 hours a day; I can’t help you much there. But I can share with you the approach I use, one I believe will help you use your time more efficiently as you develop really great overview dashboards.

The most important thing to do is step away from your computer and acquaint (or re-acquaint) yourself with both your colleagues and your whiteboard and markers. This idea doubtless sounds antiquated, but acting on it will make a huge difference: it frees you from worrying about or being distracted by the values in the data, and permits you to think contextually about the dashboard you need to create. Liberating your eyes from the computer screen lets you exchange ideas with your colleagues; using a whiteboard means that if something isn’t working, you can erase it and start over – no harm, no foul.

Think of all that this freeing step allows you to consider:

  • The scope and role of the people using the overview dashboard, and what decisions they need to make. Are they responsible for many facilities and departments? If yes, then they need an overview summary dashboard that lets them monitor several locations on a single page (read more about this here).
  • The data categories you have to work with. I can’t emphasize enough that you must NEVER skip this step: it is essential to being able to summarize and organize the data on an overview dashboard in a correct and meaningful way based on your viewer’s scope, role, and decisions.
  • The context: “Compared with what?” Everything in data analysis and display is anchored in comparison. Do you have budget data, targets, previous results and/or group comparisons? If you can’t answer the question “compared with what?” your viewers will invariably end up saying “so what?”

Here’s a brief example of what I mean:

Imagine we’ve been asked to create an overview dashboard for the senior Quality Director for a multi-system organization. Asking the questions suggested above, we discover that

  • her scope and role encompass numerous facilities and multiple performance measures. The high-level decisions she must make include identifying groups that may need help improving their performance on quality measures, and determining if there are measures that all groups need help on.
  • the data categories she works with include the institutions delivering care and the quality metrics required by regulatory groups.
  • the context of her decisions includes monitoring groups and measuring performance in comparison with each other. She has set performance targets for each group and each measure.

From a first review of the data, you learn that there is historical and current information available that may be categorized by facilities and measures.

Armed with the crucial gleanings summarized above from careful research and review, you can create an overview dashboard that will allow the director to consider the data in two useful, revelatory ways:

  • Facilities: Anchoring the first view of the data to each facility allows the director to rate each facility’s performance, identifying those doing well, and those that need to improve. This view helps the director consider “whom” she may need to focus on.
  • Measures: The bottom half of this example is organized by each measure, and allows the director to discern specific measure(s) that some or all facilities need to improve upon. This view enables a high-level identification of “what” should be the focus of improvement efforts.


In this very simple example, I have created a summary view that lays out for the director, at a glance, the best- and worst-performing facilities, and makes it easy to quickly spot which facilities need her focused and immediate attention. My second display highlights specific measures needing improvement at multiple facilities.

Of course, supporting zoom reports are required to understand the underlying details, but at a very high level, this approach helps point the director in the right direction as she monitors results over time.

I wish the refrain from the song “Time Is on My Side” (as recorded by the Rolling Stones) were true in this case; alas, “It’s Only Rock ’n’ Roll.” (But I still like it.)

Posted in Best Practices, Dashboards, Newsletters | Leave a comment

The Hammock Chronicles (2016 Edition)

Long-time subscribers to this newsletter know that every August, my family and I retreat to Bustins Island in Casco Bay, Maine, where I commandeer the hammock and contemplate life (along with the inside of my eyelids). (Check out previous posts from my island hammock here.)

Very little has changed about Bustins Island since I first started coming to it 31 years ago with my husband, Bret. One thing that has, however, is that after all those summers I know the Island drill, as does my family, and we have a highly streamlined system for packing, travel, and enjoying our stay. We know exactly what to take for a fab vacation: good food and wine, bathing suits, some great books. And that’s it.

Musing on the accomplishment of having exactly what we need on the Island got me thinking (from the hammock, of course) about dashboard and report design, and the experience and discipline it takes to display exactly the right information needed for informed decisions – nothing more nor less.

Finding this balance can take a little time, and is often the result of trial and error; but once you commit to a method and approach, you will find your work much easier. The following “Three Commandments” will remind you to try techniques I have found extraordinarily helpful.

Document and Diagram. Take time to understand your clients’ mental models of the way they use data in their work. A diagram like the one below will guide you and them through the documentation of precisely what information should display on an Overview Summary Dashboard (for monitoring performance); Zoom Reports (for more in-depth analysis on a specific dimension); and Details Lists.



Talk it Through. If you get stuck (and you will!), ask your clients to describe specific situations in which they used data that you’d displayed for them; or to recall scenarios when they would have found additional information helpful. Often this clarifies for both of you how and at what stage in the thought process information will be most supportive, and the precise spot it will be displayed most effectively. Sometimes after this process, a metric may even be judged of no value, and can be deleted to free up precious space.

Build Simple Prototypes; Be Engaged. Okay, this sounds obvious – but you must engage your clients, honestly and clearly discussing what will be created and using basic prototypes to guide you through several rounds of brainstorming before everything starts to crystallize. Remember, though, that there is a fine line between “tossing around ideas” and “over-thinking”: you also need to know when to stop prototyping, and build and deploy. The perfect is the enemy of the possible.

It took our family years to figure out exactly what we needed on Bustins Island for a great vacation – and of course that “exactly” has changed over time as our children have grown and our interests evolved.

But once we thought deeply about what was important to us, and what material objects would only enhance enjoyment of that, we were able to enjoy our vacation to the fullest – supported by just the right type and amount of stuff.

The absolute same thing is true about reports and dashboards. Once you determine what your clients need and hope to accomplish, you can collaborate with them to choose just the right supporting data and information, and the most effective way to display those – nothing less, nothing more.

Posted in Best Practices, Communicating Data to the Public, Dashboards, Newsletters | Leave a comment

2016 Summer Reading List for the Healthcare Data Geek

I don’t know about you, but I was shocked when I glanced at the calendar and realized that it was time for my annual summer reading list for healthcare data geeks! HULLO! Wasn’t it October just last week?

Anyway, without any further delay, here are my recommendations for you to consider, take to the beach, or peruse while hanging in a hammock (my favorite summer pastime, as many of you know).

The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t


It only seems fitting that I’m writing about this book while the movie Moneyball is playing on the television in the background. That’s because Nate Silver, the author of The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t, first gained public notice when he devised a new and innovative method for predicting the performance of baseball players – one which, by the way, he later sold for a tidy sum to the Web site Baseball Prospectus.

No slug, Silver then went on to win big at the poker tables and to reinvent the field of political predictions (check out his blog FiveThirtyEight, named as a nod to the number of votes in the Electoral College).

Silver doesn’t spend much time explaining to the reader how he develops his prediction algorithms. Rather, he evaluates predictions in a variety of fields, including epidemiology, weather and finance. For example, he explains the reason why we are not even close to predicting the next killer bird flu epidemic or the next ruinous earthquake.

If you are hoping to find a book that has riveting tales about success in the face of long odds, this isn’t it. But if you are interested in gaining great insights into the pitfalls of the increasingly over-used term, “predictive analytics,” this book is for you.

When Breath Becomes Air


Although this book may seem out of place on a reading list for healthcare data geeks, I would argue it is absolutely appropriate and one that you should give serious consideration to reading.

When Dr. Paul Kalanithi sent his best friend an email in May 2013 disclosing that he had terminal cancer, he wrote: “The good news is that I’ve already outlived two Brontës, Keats and Stephen Crane. The bad news is that I haven’t written anything.” But that changed in a powerful and beautiful way – with only 22 months left to live, Dr. Kalanithi, who died at age 37, overcame pain, fear and exhaustion to write this profoundly insightful and moving book, When Breath Becomes Air.

On an emotional level, this book is unbearably tragic. But it is also an oft-needed reminder that all of healthcare is and should be about the people we serve with our work, whether directly like Dr. Kalanithi or indirectly in supporting roles. As I read about Kalanithi’s struggle to understand who he became once he could no longer perform neurosurgery, and what he wanted from his remaining time, my heart broke. And when I read about a terrible period of time when his oncologist was away and he was treated by an inept medical resident who nearly hastened his death by denying him one of the drugs he desperately needed, my anger left me fuming at the unconscionable gaps in our healthcare systems.

Here’s the bottom line – I am including this book on my list this year because it is beautifully written and because it is a sober reminder about the importance of our collective efforts to deliver high quality, compassionate care to all who seek medical attention. We must continue the fight, and Dr. Kalanithi’s story serves to remind us exactly why.

Nudge: Improving Decisions About Health, Wealth, and Happiness

Nudge – to give (someone) a gentle reminder or encouragement. – Webster’s Dictionary

As I was packing away some of the books on my office shelves (to make room for new books), I couldn’t help but stop and flip through Nudge one more time.

Few people will be surprised to learn that the setting in which individuals make decisions often influences the choices they make.

How much we eat depends on what’s served on our plate. The items we pick from the cafeteria line correspond with whether the salads or the desserts are placed at eye level (although I can argue that no matter where dessert is, I will find it). The magazines we buy depend on which ones are on display at the supermarket checkout line.

But the same tendency also affects decisions with more significant consequences: How much families save and how they invest; what kind of mortgage they take out; which medical insurance they choose; which cars they drive.

Behavioral economics, a new area of research combining economics and psychology, has repeatedly documented how our apparently free choices are affected by the way options are presented to us.

This practice of structuring choices is called “choice architecture” and Richard Thaler and Cass Sunstein’s book is an insightful journey through the emerging evidence about how decisions are made and how individuals and policy makers can make better ones.

Thaler and Sunstein apply the principles of choice architecture to a few problems in healthcare:

  • How could Medicare part D be improved?
  • How can organ donation rates be increased?
  • Why shouldn’t patients be allowed to waive their right to sue for medical negligence in return for cheaper health care?

But the concepts in the book go well beyond their specific examples and could prove very useful to practicing clinicians who, the authors note, are often in the position of being choice architects for their patients.

Although there is still a lot of work to be accomplished (a lot), some of the principles of choice architecture are beginning to find their way into projects to promote better care, ensure better health outcomes and lower costs. These include, but are not limited to:

  • Alignment of incentives with desired and measurable outcomes (e.g., improved provider reimbursement for the active and measurable care coordination of diabetic patients).
  • Default care options that support better health practices (e.g., childhood immunizations).
  • Communication about care and treatment choices and their associated outcomes in patient-friendly formats (e.g., structured and well-supported informed decision-making programs).
  • Systems that expect and therefore are designed to prevent, detect and minimize errors and improve patient compliance (e.g., pill cases and inhalers with dosage counters, alerts and reminders).

Nudge still holds up on a second reading – the examples are interesting and fun. More important, the information in this book absolutely has the potential to change the way you think about healthcare systems and the delivery of patient care.

And there you have it – my 2016 summer reading list for geeks.

But before I sign off, I want to thank all of my dedicated newsletter subscribers; the purchasers of The Best Boring Book Ever series; and the greatest clients one could have for the chance to work together as we endeavor to “show and see” new opportunities to improve healthcare. We look forward to visualizing even more healthcare with you.

Happy reading!

Posted in Books, Newsletters | Leave a comment

Raising the Bar on Stacked Bar Charts

Unfortunately, and more often than is good for my mental health, I encounter data being ineffectively displayed in stacked bar charts. Which phenomenon leads to my “Question of the Day”: When should we present our healthcare data in a stacked bar chart versus some other display form? (A quick thanks, before I forget, to data-viz expert Steve Few for his recent insightful post on this subject.)

As with all charts, we need to think first about the different types and characteristics of the data we are working with. (Are we seeing a time series? An interval? Nominal?) What do we need to tell our viewers? Do we need them to

  • Understand the distribution of the data and whether or not it is skewed?
  • See how the data is trending over time?
  • Compare the parts of a whole or the sum of two or more of the same data parts for different groups?

Once we have considered both our data type and our message, we can confidently select the right chart design for the job.


In the following example, we need to clearly show the age distribution of a group of patients. If we use a horizontal stacked bar chart, it will be close to impossible to quickly and easily compare age groups and determine if they are distributed normally, or if they skew towards younger or older ages. Compounding this problem is the use of color, such as the shades of blue and grey, which are very similar, but show different percentages. For example, the age group 10-19 years (21%) is displayed in grey, as are the 60-69 age group (4%) and the 80+ cluster (1%).


The appropriate way to display the age distribution of the population of interest is with a histogram like the one below.


Displaying the data like this makes it easy for the viewer to directly compare the values in the different age categories by looking at the height of the bars, and to understand if the patients are skewing younger (as in the display above) or older.

Stated another way, a histogram is perfectly designed to enable us to compare the size of the bars and see the shape and direction of the data.
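If you build your displays in Python, a minimal matplotlib sketch shows how little code the histogram-style view takes. The age groups and percentages below are fabricated for illustration (loosely echoing the example above), not real patient data:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display window needed
import matplotlib.pyplot as plt

# Fabricated age-group percentages for illustration only
age_groups = ["0-9", "10-19", "20-29", "30-39", "40-49",
              "50-59", "60-69", "70-79", "80+"]
percents = [18, 21, 17, 14, 12, 8, 4, 5, 1]

fig, ax = plt.subplots()
ax.bar(age_groups, percents, color="steelblue")  # one bar per age bin
ax.set_xlabel("Age group (years)")
ax.set_ylabel("Percent of patients")
ax.set_title("Patient age distribution")
fig.savefig("age_histogram.png")
```

Because every bar starts at zero on a common axis, the viewer compares bar heights directly and the skew toward younger ages is immediately visible.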

Trends Over Time

Another mistake I see on a regular basis is the use of a stacked bar chart to display trends over time for different parts of a whole or a category of data, as in the chart below.


Unfortunately, this approach permits accurate viewing and interpretation only at the very bottom or first part of the stacked bar (starting at 0). A viewer cannot in fact accurately or easily see how categories change over time, because each part of the bar begins and ends at a different place on the scale.

In order to correctly interpret what she sees in such a design, the user must do a mental calculation (a sort of math gymnastics) involving the beginning and end points of each section of the bar, for each time-frame.

She must then hold those pieces of data in memory, while simultaneously trying to understand how the data has changed through time, and attempting to compare it to the same information for all the other sections of the bar. (Merely describing this onerous process makes me tired.) The best way to show trends over time is with a line graph like the one below.


Such a graph allows the viewer to see whether something is increasing or decreasing, improving or getting worse; and how it compares to other parts of the whole. I have been challenged a few times by folks who believe that the stacked bar chart is better suited to showing that the displayed data is part of a whole; however, one can highlight that aspect easily by labeling the chart and lines clearly, as in the example above.
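As a rough illustration of the same idea in code, here is how parts-of-a-whole trends might be drawn as lines with matplotlib. The category names and quarterly percentages are fabricated for this sketch:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
# Fabricated parts-of-a-whole percentages; each quarter sums to 100
trends = {
    "Commercial": [38, 42, 46, 50],
    "Medicare":   [35, 33, 31, 30],
    "Medicaid":   [27, 25, 23, 20],
}

fig, ax = plt.subplots()
for name, values in trends.items():
    # One line per category: every series shares the same zero baseline,
    # so increases and decreases can be read directly
    ax.plot(quarters, values, marker="o", label=name)
ax.set_ylabel("Percent of total")
ax.set_title("Payor mix by quarter (parts of a whole)")
ax.legend()
fig.savefig("trend_lines.png")
```

Labeling the lines (here via the legend, or by annotating each line's end point) is what preserves the "parts of a whole" message without forcing the viewer into stacked-bar math gymnastics.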

Comparing Parts of a Whole and Sums of Parts

At this point, you may be asking, “OK, when is it appropriate to use a stacked bar chart?”

Well, let me tell you. Whenever you need to show two – and two only – parts of a whole, a stacked bar chart does the trick quite nicely, and can also be a space-saver if you have limited real estate on a dashboard or report. The display works well precisely because the viewer doesn’t have to do the math gymnastics described above: the two parts can easily be seen and compared.

Depending on the layout of a report, you can play around with vertical or horizontal bars as in the two different displays below to determine what will work best for your specific report or dashboard. I often prefer to use horizontal bars, because they allow me to place my labels once and add additional information in alignment with the bars (such as figures or line graphs) to show trends over time.
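If you want to experiment, a two-part horizontal stacked bar takes only a few lines of matplotlib. The facility names and the inpatient/outpatient split below are made up for illustration; the key is the `left` argument, which starts the second segment where the first ends:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Fabricated two-part split (percent of revenue) by facility
facilities = ["Facility A", "Facility B", "Facility C"]
inpatient = [62, 48, 55]
outpatient = [38, 52, 45]

fig, ax = plt.subplots()
ax.barh(facilities, inpatient, color="steelblue", label="Inpatient")
# Stack the second part to the right of the first via left=
ax.barh(facilities, outpatient, left=inpatient,
        color="lightgrey", label="Outpatient")
ax.set_xlabel("Percent of revenue")
ax.legend(loc="lower right")
fig.savefig("two_part_stacked.png")
```

With only two parts, both segments can be judged at a glance – one from the left edge, one from the right – which is exactly why the display works.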



Trying to compare the sum of parts using two stacked bars, however, generates yet another problem. As I have said, it is very difficult to understand how big each part of the bars is, never mind comparing one bar to the other in some meaningful way. And the piling on of different colors, as in the graph below, is just distracting: it requires looking back and forth between the two bars as we try to hold colors and numbers in our short-term memory – a task none of us is very good at. More likely than not we give up on this cumbersome task, and the message is lost.


There is, all the same, a way to compare the SUM of the same parts using a stacked bar chart. In the following example (which I found on Steve Few’s site), I can compare different clinics’ payor mix (for the same payors) with a stacked bar chart like this:


It is important to note in this example that the parts are arranged in the same order on both bar charts, and the two payor groups to be compared are at the very bottom of the charts. This design permits an effortless grasp of begin and end points. And here the use of color separates those parts from the others, drawing the viewer’s attention to the comparison to be understood.

As with all data visualization, the goal is to create charts and graphs that help people see the story in mountains of data without doing math gymnastics, color-matching, or anything else that strains not-always-reliable (and always over-taxed) short-term memory and pre-attentive processing.

Bottom line? Stop and think about the type of data you need to communicate, what you want your viewers to consider, and the best data visualization to accomplish these tasks.

Posted in Communicating Data to the Public, Data Visualization, Design Basics, Graphs, Newsletters | Leave a comment

My Secret Tip for Testing Data Visualizations

This past Sunday my husband, Bret, our pup, Juno, and I headed out to Deer Island in Massachusetts Bay. We love this walk because of the fantastic views it affords of the Bay and of Boston, and because the island’s history is always a fun and fascinating topic of conversation.

For example, on this excursion, Bret and I talked about Trapped Under the Sea, Neil Swidey’s riveting book about the nearly-10-mile-long Deer Island Tunnel, built hundreds of feet below the ocean floor in Massachusetts Bay. It helped transform the Harbor from the dirtiest in the country to the cleanest – and its construction led to the tragic (and completely avoidable) deaths of five men.

As we rounded the southwest corner of the island, Boston revealed itself to us, and we stopped to see how many landmarks we could identify, along with an interesting fact about each to liven things up (yes, we do try to one-up each other).

We’ve done this numerous times over the years, but on this occasion, the exercise started me thinking about what I was seeing in an entirely new and different way. I began on my left looking at Fort Independence, then moved my eyes to the right to see the Prudential and John Hancock buildings, then the Bunker Hill Monument, followed by the Zakim Bunker Hill Memorial Bridge and the Logan Airport Control Tower.

That’s when it hit me: I was creating sentences and weaving them into a narrative about my beloved city using visual landmarks as cues, just as I do with my healthcare data visualizations.


I’ve developed a habit when I’m designing or testing reports and dashboards: I imagine that I’m in front of the individual or group they’re intended for. Speaking aloud (yes, I do talk to myself on a regular basis), I practice to test whether, using the figures and graphs as my guide, I can create a cohesive, fluent, and compelling narrative.

The reason I do this (and that I encourage you to do it, too) is that I’ve learned that if I can tell a guided story about the data and information on the reports and dashboards I’m designing, then the people in my intended audience will be able to as well. Conversely, if I find myself struggling and stumbling, then I know I need to go back to the drawing board and either refine what I’ve created or, yes, ditch it and start over.

Consider the following prototype CEO Monitoring Dashboard that my team and I at HealthDataViz (HDV) created using fabricated data. I’ve added a few examples of the sentences and narrative I wrote as we were developing and testing it.


I always begin my descriptions with an introduction or executive summary about the level of data being displayed (Summary Overview vs. Subject-Area Specific, for example); the intended audience; and overall objectives and end use.

Next, I carefully survey the data being displayed, moving primarily from left to right and top to bottom – or, depending on the layout of the dashboard and leveraging the way that our eyes cover a page, beginning at the top left, moving to the right and then down the right-hand column and back up along the left-hand one.

Perhaps most important, I include very specific examples supported by data points. Selecting just the right ones for my review may be the hardest and most time-consuming part of this self-check I do, but it is absolutely essential for testing that what I have displayed is correct and makes sense – and that I can explain it in simple, brief terms.

Here is an abbreviated example of what I mean (pretend you’re in the room listening while I practice):

Summary Overview

  • This Hospital CEO Dashboard takes into account the current environment that hospital CEOs must navigate – one shaped by Value Based Purchasing (VBP) and public reporting, and where financial, clinical, information technology, and patient satisfaction results are all inextricably linked.

Top Left – One-Month Results and Summary Performance

  • On the upper left side of the dashboard, we can see that the Actual Average Daily Census for December was 4% below Budget (254 versus 264); and that as shown in the trend graph, this performance is reflective of the past twelve months’ performance, culminating in a YTD below-budget result of 8%.

Top Right – Payor Mix

  • It is also interesting to note changes to the hospital’s year to date (YTD) payor mix displayed in the bar graph at the top right of the dashboard. For example, in the current year, Commercial Insurance represents approximately 50% of all hospital payors as compared to 40% in the previous year.

Middle Right – Quality and Patient Satisfaction

  • On the HCAHPS survey question “Would recommend this hospital,” approximately 80% of the patients responding for this specific hospital said “yes” as displayed by the horizontal blue bar. This result misses the hospital’s target of 90% (represented by the vertical black line), and places the hospital in the 75th percentile nation-wide, as signified by the underlying horizontal stacked bar in shades of grey (no, not the movie, people – the bar chart!).

Bottom Right – EHR Compliance

  • In this display, we can see that Medicine and Pharmacy are performing better than their target levels at 100% compliance, and that Pathology and Urology have the worst compliance rates, at only 60% each.

Bottom Left – Hospital Specific Key Metrics

  • Two specific metrics that the CEO wants to monitor are the hospital’s 30-day readmission rates, and Supply Expenses as a percentage of Net Operating Expenses compared to target.

Middle Left – Mortality O/E Ratio

  • This display reveals that for the last three months displayed, the O/E ratios are statistically unusually high (more deaths recorded than we would have expected, and the confidence interval does not include one). In October, the ratio was approximately 1.5; in November 1.8; and by December, it had climbed to 2.0. We have also coded these statistically significant O/E ratios in red to draw attention to them.
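For readers who like to tinker, here is one rough way to compute an O/E ratio and flag it when its confidence interval excludes one. The numbers are fabricated, and the normal approximation to the Poisson count of observed deaths is a simplification for illustration – not the method behind the dashboard above:

```python
import math

def oe_ratio_ci(observed, expected, z=1.96):
    """O/E ratio with an approximate 95% confidence interval.

    Uses a simple normal approximation to the Poisson count of
    observed deaths; fine for a sketch, not for production statistics.
    """
    ratio = observed / expected
    half_width = z * math.sqrt(observed)  # approx. SD of a Poisson count
    lower = (observed - half_width) / expected
    upper = (observed + half_width) / expected
    return ratio, lower, upper

# Fabricated December figures: 40 observed deaths vs 20 expected
ratio, lo, hi = oe_ratio_ci(40, 20)
flagged = lo > 1.0  # statistically unusual: the interval excludes 1
```

With these made-up inputs the ratio is 2.0 and the whole interval sits above 1, so the point would be coded red to draw the director's attention, just as described above.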

I cannot encourage you enough to start using this review-and-read-aloud technique to challenge yourself and clarify whether you have created a dashboard that makes sense and provides insights for your audiences that will lead them to take prompt, effective action. It is a simple, fast, and inexpensive way to get the answers you need for yourself and your own confidence and serenity.

The process may not always be easy: when you have to really, truly describe what you have created in a clear and compelling manner, using detailed explanations with examples from the data, I’ll bet you’ll find it challenging – perhaps even rather frustrating – the first few times you try. But keep at it: in the long run, you will discover that it helps you to create much better and more comprehensive reports and dashboards.

And if you ever need a break to clear your head, I have the perfect walk in mind to do so.

Posted in Best Practices, Communicating Data to the Public, Dashboards | 1 Comment

David Bowie vs. Alan Gilbert

I recently watched Charlie Rose interview New York Philharmonic conductor Alan Gilbert.

During the interview, Gilbert described how he slowly and in small increments moved the Philharmonic musicians (and by extension their audiences) from a traditional view to a thoroughly new one of how concerts may be performed.

First, Gilbert had the musicians crumple up pieces of paper and throw them at him at the end of a concert. (I know: radical stuff! Call the concert police!)

Over the next ten (yes, ten) years, he made other small changes in the performances, and along the way earned enough trust from both musicians and audiences to permit occasional rowdy behavior, as when the musicians don wild, themed costumes and move freely around the stage while playing.

It was interesting to hear about these amusing wrinkles in standard concert behavior; but what struck me most vividly was Gilbert’s willingness to “meet [the musicians] where they were,” and the patience and skill it took to make that journey.

By contrast, we – his fans – had to meet singer, composer, musician, artist, and creative spirit David Bowie, who died earlier this year just days after releasing yet another ground-breaking, game-changing album, where he was.

Often described as a visionary chameleon, Bowie hit the ’70s music scene with a look, sound, and attitude that caught most people by surprise. His physical appearance alone made the Beatles and the Stones look like schoolboys in short pants.

He didn’t just bend conventional ideas on gender, art, music, performance, costume, and genre; he took them on a Turkish Taffy roller coaster ride inside a house of mirrors. Unlike Alan Gilbert, he showed up living and performing his vision, and invited us to come along – or not. We had to meet Bowie where he was, not the other way around.

I would love to conduct my career in the spirit of David Bowie, but I’m a pragmatist (with a mortgage) working in an industry that moves at its own pace. As a result, I’m continually learning new ways to meet people where they are. I imagine most of you are, too.

In that spirit, I offer here three approaches I’ve found helpful when I work with groups that need me to meet them where they are:

  • Use Small Demonstrations of Data Visualization Best Practices.

    Identify existing reports and dashboards that may be improved by replacing a poor display device (like a pie chart) with a bar chart; or changing a three- (or more) part stacked bar chart to a small multiple display.

    By making such modest changes, you can begin to move people toward using and understanding the best practices of data visualization at a pace they find comfortable and easy to accept.

  • Focus on Areas with the Most Evident ROI.

    Identify spots where data is urgently and immediately needed to prevent the imposition of a penalty or to mitigate risk, and is not being well reported (i.e., data reports are onerous and hard to use).

    One example might be performance metrics tied to CMS Annual Payment Updates, or data analysis to identify and manage patient risk factors for outcomes such as morbidity or mortality. Such weak places are often great starting targets, because the return on investment can be easily quantified.

    People are under pressure to actively manage information and avoid ineffective or confusing results, and are therefore more open to new ways of seeing, thinking, and analyzing.

  • Look for Blank Canvases.

    Seek the areas and groups where no dashboards or reports exist. If you’re lucky enough to find such a situation, you’ll get the chance to use the best practices of data visualization from the start, and to capture people’s support for those best practices right away.
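The first approach above – replacing a pie chart with a ranked, directly labeled bar chart – can be sketched in code. This is a minimal illustration in Python with matplotlib; the unit names and values are hypothetical, not from any real report:

```python
# Minimal sketch: swap a pie chart for a ranked, directly labeled bar chart.
# Unit names and case counts below are hypothetical.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

data = {"Unit A": 42, "Unit B": 67, "Unit C": 23, "Unit D": 55}

def ranked(items):
    """Sort label/value pairs largest-first so the chart reads as a ranking."""
    return sorted(items.items(), key=lambda kv: kv[1], reverse=True)

pairs = ranked(data)
labels = [k for k, _ in pairs]
values = [v for _, v in pairs]

fig, ax = plt.subplots(figsize=(5, 3))
ax.barh(labels, values, color="#4878a8")
ax.invert_yaxis()  # largest value on top
for y, v in enumerate(values):
    ax.text(v + 1, y, str(v), va="center")  # label each bar directly
ax.set_xlabel("Cases")
fig.savefig("ranked_bars.png", bbox_inches="tight")
```

Because the values are ranked and labeled directly on the bars, there is no color-coded key for the viewer to decode – one of the modest changes that moves people toward best practices at a comfortable pace.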

Listening to Alan Gilbert describe small but powerful steps toward realizing his vision for the Philharmonic was good for me: it helped me remember that although I do believe in change, I sometimes need to retreat – as most of us do – to the reassuring fallback position “Change is good. You go first!” all while humming to Bowie playing softly in the background: “Ch-ch-ch-ch-Changes, turn and face the strain, Ch-ch-Changes…”

Posted in Best Practices, Know Your Audience, Newsletters

And Around We Go… Again

As I mature (and boy, is aging a high price to pay for maturity), I find I have very little need, or even desire, to win an argument (never mind every argument), or to prove that I’m right about something.

I suppose that’s true in part because I understand that we all see the world in different ways, and in part because it seems to take a very long time for even solid, compelling evidence about anything to persuade people to change their firmly held beliefs. (And I admit that sometimes I count myself among those folks.)

It’s also why I’ve written very rarely (even though I’m occasionally tempted to say something) on “why I’m a card-carrying member of the ‘Better not use pie charts’ club.”

There are many expert voices, and there is plenty of evidence, on this topic.

The data-visualization pioneer Edward Tufte said that “pie charts should never be used”; William Cleveland referred to pie charts as “pop charts” because they are commonly found in pop culture media rather than in science and technology writing. Data-visualization expert Stephen Few wrote the widely read and frequently referenced essay “Save the Pies for Dessert.” All the same, I feel the need to add my voice to the chorus in the hope of improving healthcare data visualizations.

What pushed me over the edge?

A free e-book from a software vendor (that should have been my first clue) which, in spite of well-established expert opinions and evidence about why pie charts are not as effective as other display devices, presents advice about the misuse of pie charts – that is, it explains how to use pie charts correctly. And around we go again – oh, my aching head!

Let’s walk through what is suggested and why those suggestions constitute bad advice; and then let’s turn to the part left out: how to display data better with nary a pie chart in sight.

Here are some excerpts (I’m paraphrasing):

Example 1

The book says…

“Don’t squeeze too much information into a pie chart: the slivers get too thin, and the audience confused.”

I say…

Use a bar chart like the one in Example 1, below. We humans find it very difficult to judge the size of the angles in a pie chart. With a bar chart, the length of each bar immediately conveys the size of the value being encoded. It’s then easy to compare the lengths of the bars directly and determine which values are larger or smaller.

We can also add a comparison or target line if we need to, which we can’t do on a pie chart.

We can label each value being displayed directly rather than making our viewers match a color-coded key to each slice of the pie, all while trying to hold the information in short-term memory as they look back and forth from the chart to the key. (Try it, and you’ll see what I mean!)



[Example 1: bar chart with direct labels and a target line (click to enlarge)]

Example 2

The book says…

“Order your slices from largest to smallest for easiest comparison.”

I say…

Okay, this is just silly!! Simply use a ranked bar chart like the one in Example 2, below.



[Example 2: ranked bar chart (click to enlarge)]

Example 3

The book says…

“Avoid using pie charts side by side – it’s an awkward way to compare data.”

I say…

Yep, you guessed it: use bar charts. And if you need to encode additional comparison data, try a bullet graph (a modified form of the bar chart). In addition to displaying the data better, a bar chart lets you add context to the visualization.

In the example below, by using a bar chart and leveraging the fact that my viewers read from left to right, I label the data once and can do all of the following:

  • show the number of cases eligible for the measure (the denominator);
  • display compliance compared to target;
  • note the difference between the current quarter performance and the target; and
  • record how each clinician has performed over the last four quarters.

You simply can’t do all this – quickly, clearly, and in a modest display space – with a pie chart. Look at the results in Example 3, below.



[Example 3: bar charts showing denominator, target comparison, and four-quarter trend (click to enlarge)]

Here’s the bottom line – pretty much anything you can do with a pie chart, you can do better with a bar chart. This is especially true for the types of displays we create in healthcare.

Bar charts make it easy to:

  • directly compare the sizes of data groups displayed.
  • directly label the data.
  • easily rank the data.
  • include comparison or target data.
  • include additional contextual data.

As is clear from this last example, bar charts are also far superior on a dashboard. They take up less space than pie charts and (as previously noted) make it possible to display additional contextual data, such as performance over time.

Every so often I come across a forum where people still rant on about how maligned pie charts are. I admit I find them – both the people and the pie charts – infuriatingly amusing. Yes, the charts can be fun on an infographic, or useful for teaching young children the concept of part-to-whole, but for me and the work I do the evidence is in – forever – and pie charts are out.

Posted in Design Basics, Graphs, Newsletters

Best Available Incomplete Information (BAII)

When I was a teenager, I had one terrible habit that drove my mother over-the-edge crazy. (OK: I had more than one. But hey, “driving your mother crazy” is part of the official job description for “teen-age girl.” I looked it up.)

My particular expertise was in the fine craft of strategically omitting information that would’ve assuredly had a negative impact on my desired outcome.

For example, I would ask if I could go to my best friend Tracy’s house for the night, but I would leave out the fact that we would be stopping by bad-boy Tom’s house for a “my parents are away” party. This fact would of course have resulted in my having to stay home – that is, in my view, in the worst outcome imaginable. (Yes, I did consider law school early in life.)

In my defense, there were times when I didn’t know bad-boy Tom was having a party until after I’d received permission to go to Tracy’s house. On these occasions I asked for my Mom’s consent based on the best available incomplete information. Of course, as is the way with all mothers, she eventually found out where I’d been (even on the occasions when no police were involved). As a result, each of my subsequent requests for permission to go out elicited an ever more rigorous line of inquiry from her.

My (now) fond memory of these mother-daughter tussles was prompted by a recent article I read in the New York Times: “The Experts Were Wrong About the Best Places for Better and Cheaper Health Care.” Let me tell you why.

Until recently, the largest and best data set available for the analysis and study of healthcare delivery in the U.S. was the one based on Medicare claims data. Private-insurance statistics have long been almost entirely inaccessible for the same type of analysis and scrutiny, as they are held and managed by private companies that are not required to make them public.

This situation has left us scant choice but to make assumptions and decisions about how our healthcare system does and should deliver care using what I have come to term “best available incomplete information (BAII).”

As highlighted in the article I’ve cited above, Medicare data have revealed enormous amounts of information about regional differences in Medicare spending, which are driven mostly by the amount of healthcare patients receive, not the price per service.

Even more important, Medicare data reveal that places delivering lots of medical services to patients often do not have any better health outcomes than those locations delivering less medical care at lower cost.

These findings based on Medicare data have, by and large, been reduced to one simple message: if all healthcare systems could deliver care in the same way these low-cost ones do, the country’s notoriously high medical costs could be controlled, and might even decline.

On the face of it, this makes perfect sense. What’s missing, however, is how these systems are performing on the delivery of care to their non-Medicare patients. Are the results observed in one cohort of patients (Medicare) also the results for all other non-Medicare cohorts (private insurance, self-pay, etc.)? Data newly available from the Health Care Cost Institute (HCCI) about a large number of private insurance plans offer new hope that we may begin to answer these and other important questions more fully.

As a first high-level analysis described by the Times article reveals, places in the U.S. that have been heralded for low-cost, high-quality care delivered to Medicare patients are not necessarily performing in the same way for their private-insurance patients.

You can see these findings displayed in the side-by-side choropleth maps below.

[Side-by-side choropleth maps: per capita Medicare vs. private-insurance spending (click to enlarge)]

Displaying the data like this reveals that (for example) although Alaska’s per capita Medicare spending is average as compared to all other areas in the U.S., its per capita private-insurance spending is above average. The data reveal a similar pattern for several other areas in the states of Idaho, Michigan, and New Hampshire (for example), where Medicare costs are either average or below average, but private-insurance spending is above average.

This isn’t the only observable difference. Interestingly, in places like my home state of Massachusetts, the opposite of the above is true: Medicare spending is above average, while spending on private insurance is average across the state.

The Times article displays this information on the maps above and also in this simple but effective graphic (click here to check a place near you).

[Graphic from the Times article (click to enlarge)]

I find these new data wildly interesting, am certain they will result in new findings, and devoutly hope that they will also lead to greater transparency in and other improvements to our healthcare system.

But this new information also serves as a serious and important reminder that we are all making decisions using only the best incomplete information currently available to us. As a result, we have to get better at understanding what it can and cannot tell us, and decide how we will act when new information becomes available.

After I read the Times article and thought about its title, I found myself annoyed at what seemed a rather negative headline: “The Experts Were Wrong…” In fact, the experts were right about what the BAII they had at the time revealed. Was it the full story? Absolutely not. Do we know that full story yet? We do not: even this new analysis is missing data on patients insured by Blue Cross & Blue Shield and Medicaid, as well as on the under- and the un-insured. To put it another way: “We still don’t know what we don’t know.”

It seems to me that the only sensible path to improving our healthcare system is to commit ourselves to continually seeking new data, information, and knowledge to support better-informed decisions, and to seek the courage to adjust our sails and lead change by following – even when that path may be disappointing, confusing, or difficult – where the data lead.

Posted in Data Analysis, Newsletters

Really Big Goals

Like a lot of people I am a big goal-setter. I especially love BHAGs (pronounced “bee-hags”): Big, Hairy, Audacious Goals.

You know: the ones based on no logic or well-developed plan whatsoever, but rather conjured up by the sheer and (sometimes) delusional belief that “Somehow, I will find a way.”

Exhibit A: starting a business with a kid headed off to college and a big-ass mortgage (a technical term in my household). To be fair, I also set a lot of smaller and saner goals for myself: the amount of money I wanted to save for retirement each year; the number of trips to the gym each week.

As 2015 ends, I find myself going back over the year to consider what I accomplished compared to the goals I set, and visualize what I hope to accomplish in 2016. As is often the case, my mind wanders to different data visualization techniques and how I might display actual vs. desired progress using graphs.

Yes, I hear you: “Memo to self: add ‘Get out more!’ to my 2016 goal list.”

A graph I often use to show how well a group performs compared to a goal or benchmark is a deviation graph.

(If you are a regular subscriber to this newsletter, you’ll recall that I have written more than one article about these types of graphs; you can check out those articles by clicking here.)

I especially like them on monitoring dashboards, because the absolute value of a changing goal or benchmark is not displayed – only the difference or deviation of actual performance from it is shown, as in the following example.


[Deviation graph example (click to enlarge)]

Displaying the information like this allows the viewer to quickly and easily answer questions such as “Are we over or under budget on revenue or expenses?” or to evaluate medication reconciliation versus a target without worrying about the actual goal or performance values, as they often change over time or are different for a group or category of similar metrics (department budgets, for one).

Such a display lets them know if performance is above or below goal, and by how much.
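A deviation graph of this kind might be built as follows – a minimal sketch in Python with matplotlib, where the months, actual values, and targets are all hypothetical. Only the signed gap between actual and target is plotted, so the target itself becomes the zero line:

```python
# Minimal sketch of a deviation graph: plot only actual-minus-target,
# so the (possibly changing) target collapses to the zero line.
# Months, percentages, and targets below are hypothetical.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
actual = [88, 91, 86, 93, 89, 95]   # % medication reconciliation
target = [90, 90, 90, 92, 92, 92]   # target may change over time

def deviations(actual, target):
    """Signed difference of actual from target, period by period."""
    return [a - t for a, t in zip(actual, target)]

dev = deviations(actual, target)
colors = ["#e07b39" if d < 0 else "#4878a8" for d in dev]  # below vs. above

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(months, dev, color=colors)
ax.axhline(0, color="black", linewidth=0.8)  # target = zero line
ax.set_ylabel("Points from target")
fig.savefig("deviation.png", bbox_inches="tight")
```

Because only the deviation is encoded, the same panel works even when different metrics in a group carry different absolute targets.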

Sometimes, a goal is set for a longer time frame, and we wish to display its actual value compared to performance. Most often a line graph like the one shown here is used for this type of display.


[Line graph: actual performance vs. target (click to enlarge)]

While this is a perfectly acceptable way to show the data, it doesn’t clarify how far from target we are.

This is where a deviation graph – one that displays the actual target value and the actual performance difference or deviation, such as in the one below – can help.


[Deviation graph showing the target value and monthly deviations (click to enlarge)]

This data display makes clear that the target is 90% on medication reconciliation, and how far below (orange bars) or above (blue bars) monthly results are. It’s also possible to see actual performance by comparing the ends of the bars to the Y-axis.

All three of these displays work – as long as they answer these key questions:

  • are target values fixed or variable?
  • is it enough to simply monitor deviation, or must actual values be displayed?

As I write this, and consider my options for displaying my own performance compared to my 2015 goals, I am beginning to get a little spooked. I may after all be awash in orange (below target!!) for some time.

But then I remember a famous comment by American author, salesman, and motivational speaker Zig Ziglar: “If you aim at nothing, you will hit your target every time.” Back to that BHAG list – and onward.

Posted in Design Basics, Graphs, Newsletters

Postcard from New Zealand

It has been almost a month since my return to the States, following a truly gratifying professional engagement with the Canterbury Health District and Health Informatics New Zealand (HINZ).

If you’ve ever had the pleasure of traveling to New Zealand, you know all the accolades about it are true… true and oh so very, very true! The landscape is spectacular, the people are lovely and yes, of course, there are far more sheep in New Zealand than there are people. Lots and lots of sheep, like this darling little lamb at the Walter Peak High Country Farm in Queenstown (which, of course, they only let me hold after they had served me his brother for lunch – clearly a brilliant strategy to reduce the number of requests for vegan meals.)


Given all the sheep and lambs we saw (it is spring in New Zealand now so there are even more lambs than usual!), it is no surprise that I started to think yet again about the great utility of small multiples to display our healthcare data.

If you are up on your data-visualization terms, you know that it was Edward Tufte, a statistician and Yale University professor, and a pioneer in the field of information design and data visualization, who coined the term “small multiples.” (You may be familiar with other names for this type of display: Trellis Chart, Lattice Chart, Grid Chart or Panel Chart.)

I think of small multiples as displays of data that use the same basic graphic (a line or bar graph) to display different parts of a data set. The beauty of small multiples is that they can show rich, multi-dimensional data without attempting to jam all the information into one highly complex chart like this one:


Now take a look at the same data displayed in a chart of small multiples:


What problems does a small-multiples chart help solve?

  1. Multiple Variables. Trying to display three or more variables in a single chart is challenging. Small multiples enable you to display a lot of variables, with less risk of confusing or even losing your viewers.
  2. Confusion. A chart crammed with data is just plain confusing. Small multiples empower a viewer to quickly grasp the meaning of an individual chart and then apply that knowledge to all the charts that follow.
  3. Difficult Comparisons. Small multiples also make it much easier to compare constant values across variables and reveal the range of potential patterns in the charts.

Now, before you construct a small-multiples data display, here are a few additional pointers:

  1. Arrangement. The arrangement of small-multiples charts should reflect some logical measurement or organizing principle, such as time, geography, or another interconnecting sequence.
  2. Scale. Icons and other images in small-multiple displays should share the same measure, scale, size, and shape. Changing even one of these factors undermines the viewers’ ability to apply the understanding gained from the first chart to subsequent charts or display structures.
  3. Simplicity. As with most things in life, simplicity in the small-multiples chart is crucial. Users should be able to easily process information across many charts, and see and understand the story in the data.
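The pointers above can be illustrated with a minimal small-multiples sketch in Python with matplotlib; the department names and compliance values are hypothetical. Note the single shared scale across panels, which lets the lesson of the first chart transfer to the rest:

```python
# Minimal small-multiples sketch: one line chart per department,
# all sharing one scale. Department names and values are hypothetical.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
series = {
    "Cardiology": [72, 78, 81, 85],
    "Oncology":   [65, 70, 69, 74],
    "Surgery":    [80, 83, 82, 88],
    "Pediatrics": [58, 63, 67, 71],
}

# sharey=True enforces one common scale across all panels (pointer 2: Scale);
# a single row in a fixed order is the organizing principle (pointer 1).
fig, axes = plt.subplots(1, len(series), figsize=(10, 2.5), sharey=True)
for ax, (name, vals) in zip(axes, series.items()):
    ax.plot(quarters, vals, color="#4878a8")
    ax.set_title(name, fontsize=9)
axes[0].set_ylabel("Compliance (%)")
fig.savefig("small_multiples.png", bbox_inches="tight")
```

Each panel stays deliberately plain (pointer 3: Simplicity), so a viewer who understands the first chart understands them all.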

I still go a little soft when I think of holding that darling lamb and patting its ears as it fell asleep in my arms. And while it is highly likely that this sweet memory will fade and I may eventually eat lamb once again, I will always remember seeing pasture after pasture of these gentle creatures and will continue to relate them to small multiples to display data!

Posted in Design Basics, Graphs, Newsletters