Background
The numerous publicly available hospital quality rating systems frequently
offer conflicting results, which may mislead stakeholders who rely on the
ratings to identify top-performing hospitals. Because there is no gold
standard for how a rating system should be constructed or how it should
perform, and no objective way to compare rating systems, we evaluated the
strengths and weaknesses of four major public hospital quality rating systems
based on our experience as physician scientists with methodological expertise
in healthcare quality measurement.
Results
No rating system received an “A” or an “F.” The highest grade, a “B,” went to U.S.
News & World Report. The Centers for Medicare & Medicaid Services’ Star Ratings
received a “C.” The lowest grades went to Leapfrog (C-) and Healthgrades (D+). Each
rating system had unique weaknesses that could lead to misclassification of hospital
performance, including the use of flawed measures, reliance on proprietary data that
have not been validated, and questionable methodological decisions. More broadly,
several issues limited all of the rating systems we examined: limited data and
measures, lack of robust data audits, challenges in composite measure development,
measurement of diverse hospital types together, and lack of formal peer review of
their methods. Opportunities to advance the field of hospital quality measurement
include better data subject to robust audits, more meaningful measures, and the
development of standards and robust peer review processes for evaluating rating
system methodology.
Conclusions
In this Rating the Raters initiative, we found that current hospital quality rating
systems should be used cautiously, as they likely often misclassify hospital
performance and may mislead stakeholders. These results can offer guidance to
stakeholders attempting to select a rating system for identifying top-performing
hospitals.