The other "Big C"

Cancer. The Big C. No one wants money to stand in the way of curing a patient.

But real life is messier. Many new cancer treatments are pricey yet deliver only marginal gains over existing therapies in life expectancy or quality of life. Forty thousand dollars for a cure is not a real dilemma for policymakers; the same spend for an extra six weeks of life is another story.

In the United States we still pretend cost effectiveness doesn't matter, even though cost is frequently taken into account implicitly and secretly. Europe has moved past that point and explicitly considers cost effectiveness in coverage and reimbursement policies. We're slowly moving in that direction in the US, too. Call it rationing if you want, as long as it's understood that BMWs and mansions are rationed, too.

As cost becomes more important, payers, policymakers, and physicians need a robust body of research as a basis for decision making. The Tufts Medical Center Cost-Effectiveness Analysis Registry, maintained by the Center for the Evaluation of Value and Risk in Health (CEVR), catalogs more than 2,000 cost-effectiveness analyses published in the peer-reviewed literature since the mid-1970s. In a recent paper in the Journal of the National Cancer Institute (When is Cancer Care Cost-Effective? A Systematic Overview of Cost-Utility Analyses in Oncology), CEVR director Peter J. Neumann, ScD and colleagues review 242 cancer-related cost-effectiveness papers.

Neumann has been working for years to encourage improvement in cost-effectiveness research. Ten years ago he published a paper examining the quality of oncology-related cost-effectiveness research. This new paper examines how far things have come since then.

…[A]dherence to recommended methods for conducting and reporting [Cost Effectiveness Analysis] results (e.g., applying a societal perspective, discounting both costs and [Quality Adjusted Life Years], providing a clear presentation of the intervention, comparator and the target population) was high and has somewhat improved over time. During 2002-2007, almost all studies clearly presented the relevant intervention, the comparator, and the target population. The proportion of studies that correctly calculated [Incremental Cost-Effectiveness Ratios] increased from 48% before 1998 to 84% after 2001. Most studies performed a sensitivity analysis to explore uncertainties in cost-effectiveness results, and the proportion of studies that presented a probabilistic sensitivity analysis increased from zero during 1976–1997 to 44% during 2002–2007.

As a member of CEVR’s Executive Advisory Board, I am encouraged by the findings of this study. As cost-effectiveness discussions become socially acceptable in the US and as the government steps up support for cost-effectiveness research, it’s worth noting that we’re not starting from scratch. There’s still plenty of room for improvement, but CEVR and others have laid a solid foundation.

Those who oppose taking cost-effectiveness into account on ideological grounds should be aware that even oncologists care about treatment costs.

April 13, 2010

One thought on "The other 'Big C'"

  1. What influences patients the most, in my experience, is their knowledge of risks/benefits. If people with refractory cancer really knew the low probability that treatments would help, and were fully aware of the likely side effects, they might decline therapy based on non-financial considerations.

    My point is that well-informed patients might opt for less treatment of their own volition.
