Let’s Do Some Simple Math


Newtown, CT – December 30, 2016

Let’s do some simple math – Hospital A performs 260 joint replacement procedures in a year and Hospital B performs 240; therefore, A does more than B. And if I told you that the average across all hospitals was 240, you might conclude that A is doing 20 procedures a year too many. But you would only conclude that if you knew very little about health care or had pretty poor analytic skills, because without knowing the size of the underlying market it is very difficult to draw an appropriate conclusion. So let’s add just one more simple variable: the number of Medicare beneficiaries in each hospital’s local area, which represents the underlying market for the procedure. As it turns out, Hospital A has 18,000 Medicare beneficiaries in its local area and B has 12,000, so proportionally, A does far fewer procedures per hundred beneficiaries than B (about 1.4 per 100 for A versus 2.0 per 100 for B). Of course, at that point a serious researcher would want to know whether the two populations had similar characteristics, or whether, for example, one of them was simply made up of younger, healthier Medicare beneficiaries. For years the Dartmouth Atlas has done exactly these kinds of studies, examining not just the raw volume of procedures per hospital but the volume by hospital referral region, so as to compare one site to another more appropriately. And yet, recently, one of Dartmouth’s own seemed to set aside the importance of contextual information when comparing the performance of providers. In a JAMA editorial, Elliott Fisher concluded that hospitals participating in Medicare’s Bundled Payments for Care Improvement (BPCI) initiative for joint replacements not only performed more surgeries, but that those surgeries were likely inappropriate, and that the entire BPCI program was therefore a failure. It’s time for a math lesson.
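For readers who prefer code to prose, here is a minimal Python sketch of that arithmetic, using the illustrative figures from the example above:

```python
# A minimal sketch of the math in the example; the figures come
# straight from the text above and are illustrative only.
hospitals = {
    "A": {"procedures": 260, "beneficiaries": 18_000},
    "B": {"procedures": 240, "beneficiaries": 12_000},
}

for name, h in hospitals.items():
    # Normalize raw volume by the size of the underlying market.
    rate = h["procedures"] / h["beneficiaries"] * 100
    print(f"Hospital {name}: {h['procedures']} procedures, "
          f"{rate:.2f} per 100 Medicare beneficiaries")

# Output:
# Hospital A: 260 procedures, 1.44 per 100 Medicare beneficiaries
# Hospital B: 240 procedures, 2.00 per 100 Medicare beneficiaries
```

Raw volume ranks A above B; the per-beneficiary rate reverses the ranking, which is the whole point.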

What this means to you – It’s unclear which is more disturbing: the lack of serious analysis by Fisher or the apparent lack of any real methods scrutiny by JAMA, because both are pretty bad. But don’t take my word for it; do the math yourself with the open data we’re providing. In it you will find everything you need for a reasoned analysis, including (1) the actual discharges for DRGs 469 and 470 for every hospital in the US, by year; (2) the associations of hospitals to hospital referral regions; (3) the number of Medicare beneficiaries by HRR and by year; (4) the hospitals and other providers participating in the BPCI, the focus of their participation, and their enrollment date in the program; and (5) certain other characteristics for each HRR for two separate years that you could consider the baseline year and the intervention year. All of these data, except for (1), were downloaded from public use files. Our friends at CareSet/DocGraph provided us with the discharge counts because they have direct access to the Medicare data warehouse and are super cool. As you dive into the data, consider that some of the hospitals included in the original BPCI study enrolled much later than others, and that any increase in volume they may have experienced is likely due to factors other than the BPCI incentives. Consider also that intervention and comparison cohorts can suffer from small-sample effects. And consider the importance of putting procedure volume in the context of market characteristics, as in the sketch below. As you perform your analyses, reach out and share your conclusions, submit them for publication, or perhaps write a letter to the JAMA editor about the merits of simple but correct math. Whatever you choose, let’s make sure the real facts are published, not ones made up to advance a particular policy point of view. We’ve done the math and will share it soon enough as our way of kicking off the new year with a bang.
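If you want a head start, here is one hedged sketch of how such an analysis might begin in Python with pandas. The file names and column names below are assumptions for illustration, not the actual names in the files we’re posting; substitute the real ones after you download the data:

```python
# A sketch of one way to start the analysis described above, under
# assumed file and column names (hypothetical, for illustration only).
import pandas as pd

discharges = pd.read_csv("drg_469_470_discharges.csv")  # item (1): hospital_id, year, discharges
crosswalk = pd.read_csv("hospital_to_hrr.csv")          # item (2): hospital_id, hrr
benes = pd.read_csv("hrr_beneficiaries.csv")            # item (3): hrr, year, beneficiaries
bpci = pd.read_csv("bpci_participants.csv")             # item (4): hospital_id, enrollment_date

# Roll hospital-level volume up to the hospital referral region,
# then normalize by the size of the underlying market.
hrr_volume = (
    discharges.merge(crosswalk, on="hospital_id")
              .groupby(["hrr", "year"], as_index=False)["discharges"].sum()
              .merge(benes, on=["hrr", "year"])
)
hrr_volume["rate_per_100"] = hrr_volume["discharges"] / hrr_volume["beneficiaries"] * 100

# Flag late BPCI enrollees: any volume change for these hospitals is
# likely due to factors other than the program's incentives.
bpci["enrollment_date"] = pd.to_datetime(bpci["enrollment_date"])
cutoff = pd.Timestamp("2015-01-01")  # an assumed cutoff, not from the source
late_enrollees = bpci.loc[bpci["enrollment_date"] >= cutoff, "hospital_id"]
```

From there, comparing baseline-year to intervention-year rates for BPCI and non-BPCI regions, with the late enrollees handled separately, is a far more defensible exercise than comparing raw hospital volumes.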

Regards,

Francois
