Is health technology assessment morally defensible?

Increasingly widespread amongst the world’s healthcare systems is the assessment of medicines and devices using some form of cost-benefit or cost-utility analysis; this is called health technology assessment, or HTA. HTA seeks to determine, using evidence of one sort or another, whether something is, broadly speaking, affordable: the cost of the medicine or device is weighed against its benefit to patients with a particular constellation of diagnostic attributes. The benefit is usually quantified in a measure called a QALY, a quality-adjusted life year, and value for money is then expressed as the cost per QALY gained. In short, it is a way of valuing lives.
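The cost-per-QALY arithmetic at the heart of this can be sketched in a few lines. The figures and function names below are entirely hypothetical, chosen for illustration, and are not drawn from any real HTA appraisal:

```python
# Illustrative sketch only: a simplified cost-per-QALY calculation.
# All figures are hypothetical, not taken from any actual appraisal.

def qalys_gained(extra_years, quality_weight):
    """Life years gained, weighted by quality of life (0 = death, 1 = full health)."""
    return extra_years * quality_weight

def cost_per_qaly(incremental_cost, qalys):
    """Incremental cost per QALY gained, versus the existing treatment."""
    return incremental_cost / qalys

# A hypothetical medicine adds 2 extra years at quality weight 0.7,
# at an incremental cost of £42,000 over the current standard of care.
gained = qalys_gained(2.0, 0.7)        # 1.4 QALYs
ratio = cost_per_qaly(42_000, gained)  # £30,000 per QALY
print(f"{gained:.1f} QALYs at £{ratio:,.0f} per QALY")
```

Note how quickly the quality weight, a single number standing in for a person's lived experience, comes to drive the whole result; that compression is precisely what the rest of this piece questions.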

HTA is a utilitarian approach to assessment. To some extent this is not surprising, as HTA is in the main a method developed by health economists who, like economists in general, hypothesise that we make daily decisions based on the utility of this or that, in terms of trade-offs (Pareto optimisation, for instance) and rational decision-making (the idea that people seek to maximise value, or utility, in what they do). This approach is increasingly in dispute in light of findings from neuroscience and behavioural economics: by positing that people do not always make decisions that are in their own best interests, they call into question a key assumption of traditional economics, that of the rational actor, always calculating trade-offs and maximising benefits.

The problem with utilitarianism, though, is that it pays no attention to the freedom of the individual; it justifies its results by the net benefit to society, regardless of the impact on the rights of individuals. Obviously, health economists don’t watch Star Trek, or they would know that the needs of the one outweigh the needs of the many. But then that, too, is a moral position.

Indeed, it is perhaps the sense that utilitarian conclusions do not correlate with many people’s moral sentiments that explains why decisions of HTA agencies, for instance NICE in England, lead to moral outrage and a sense of, if not injustice, at least unfairness. While the results of an HTA process may lead to a quantitatively defensible conclusion, people sense that the conclusion is not morally defensible.

How are we to judge? Few would use utilitarian arguments in this way in other spheres: we would not calculate who deserves welfare in terms of its net benefit to society in quality-adjusted life years, though perhaps we do allocate welfare on the moral assumption that some people deserve it while others do not.

Do we allocate support to communities ravaged by floods based on their overall contribution, or utility, to society? If you could donate £10 million to a university, would you pick Oxford University or Thames Valley University: which one is more worthy? But would you want to treat people this way?

HTA doesn’t even let us value lives in quite this way, since it neatly avoids deciding on the worth of any particular type of person, who merely happens, through misfortune, to need some medicine that fails the HTA tests. HTA is a way of drawing a conclusion without actually having to decide an allocation for any one person in particular, and so it keeps us from confronting what we are doing. Bentham would approve.

There is, though, a technical problem with HTA, and it has to do with whether, at one level of assessment outcome, utilitarian models are only usable when the decision to be made does not have life-threatening consequences for some people.

If the QALY threshold is, say, £35,000, as it apparently is in the case of NICE, are decisions below that threshold, which tend toward ‘yes’ or approval, morally different from decisions above it? I suggest that different moral criteria come into play above the threshold; this is where I think our moral outrage should be directed and where HTA fails. Regrettably, HTA models treat their results as broadly continuous, that is, decisions above and below the threshold are seen as essentially of the same type. But I have argued elsewhere that above the threshold HTA models fail, for reasons other than their analytical soundness: above this threshold, the conclusions may lead to a lessened quality of life; in other words, they actually crystallise the health outcome rather than avoid it.
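The discontinuity described above can be made concrete with a toy decision rule. The function name is illustrative and the £35,000 figure is simply the threshold attributed to NICE in the paragraph above:

```python
# Illustrative sketch: HTA treats cost per QALY as a continuum, but the
# decision it drives is binary at the threshold. Figures are illustrative;
# £35,000 is the threshold the text attributes to NICE.

THRESHOLD_PER_QALY = 35_000  # £ per QALY

def hta_decision(cost_per_qaly):
    """At or below the threshold tends toward approval; above it, rejection."""
    return "approve" if cost_per_qaly <= THRESHOLD_PER_QALY else "reject"

# Two technologies a pound or two apart receive categorically different
# outcomes, though their cost-effectiveness is essentially identical.
print(hta_decision(34_999))  # approve
print(hta_decision(35_001))  # reject
```

The point of the sketch is that the model's output is smooth while its consequences are not: a marginal difference in the ratio produces a categorical difference in outcome, which is where the moral character of the decision changes.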

Therefore, in valuing lives, those above the threshold experience greater injustice than those below; they are treated differently, unfairly, unjustly, perhaps as less worthy, but certainly differently. Indeed, above the threshold we feel we are more in the realm of our moral sentiments about the value of human life, and less in that of our moral sentiments about the allocation of scarce resources.

If this were not so, then we would be living in a society that believes that the determinant of all important moral and political decisions is affordability; and if that were so, then we could not even afford the costs of the inefficiency brought on by democracy, the inconvenience of not being able to exploit people, or the costs of equal rights.

Perhaps, though, in our financially contaminated world, all we can think about today is money, and that is further contaminating our perception of what sort of society we are actually trying to foster. Certainly, the protests on Wall Street and elsewhere point to the view that there is some unjust allocation of the benefits of government bail-outs that simply does not reach those ‘at the bottom’.

John Rawls wrote that we should distribute opportunity in a society in such a way that the least well off benefit the most. In the context of HTA, medicines and technologies that benefit only a few, but at great cost, represent a cost worth bearing, as the least well off, namely those who need them most (who have the condition they treat and, in some societies, can least afford them), would benefit, even if only a little; that is the price we pay for justice.

This, I suggest, is the root of our moral outrage at HTA: that it unjustly fails to serve those who need it most.

I am left wondering about the underlying morality of HTA as a government scheme. Governments, as we know, are the last resort when things are tough, and one would hope they ensure that the least well off in society are not penalised simply in virtue of being least well off. In healthcare, someone has to be the carer of last resort; using HTA as a way of avoiding this responsibility is not morally defensible.
