Category Archives: Quality

Why the scandal at the Veterans Administration health system matters to everyone


The scandal that has now led to the ‘resignation’ of the head of the US VA Health System matters to more than just the US and US veterans. The VA health system is the closest thing the US has to the UK’s NHS and to the health systems of many other countries where the state is the controlling force.

According to reports in the New York Times, expanded on in Forbes, three factors are relevant:

  1. shortage of physicians
  2. perverse incentives
  3. culture of dishonesty

Boiling this down to the critical factors that apply outside the US leads to specific considerations for countries that try to control healthcare through greater state intervention:

  1. Physician shortages are caused mainly by health systems limiting access to medical schools (and indeed to other professions). There is ample evidence that labour-force forecasting is inaccurate and, given the highly specialised nature of healthcare, we really don’t know how many doctors, nurses, etc. we need; we only know that the current system of rationing is unlikely to produce sufficient supply. While the costs of training health professionals are high, the rewards are also high and of good quality, and these benefits accrue to individuals as well as to society. Why the public purse should subsidise this as much as it does, while also limiting access, needs to be rethought.
  2. Health systems use a variety of incentives to coerce or alter clinical behaviour. While putting doctors on the payroll is assumed to limit financial conflicts of interest, it embeds clinical behaviour within a managed system full of rules and regulations which will invariably put administrative convenience above clinical and patient needs. Falsifying records is nothing new, but tying rewards to reported data only creates an incentive to game the rules to maximise the benefits. Gaming of incentives is not new, and it is possible to model and test whether proposed incentives will work and how they might prove perverse (a toy sketch of such a test follows after this list).
  3. Dishonesty is embedded in the culture of work, and rooting it out means going back, perhaps to incentives again, to understand why it is more beneficial to lie. Dishonesty may take hold more easily within highly bureaucratised systems, where people are dislocated from patients and see themselves as simply tasked with ensuring the stability of the system. This is a tough one, but in some countries doctors’ employment contracts explicitly put them in conflict with their employers by emphasising the relationship between their work and costs.
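
Here is a toy sketch of the kind of incentive test mentioned in point 2 above. It is my own illustration, not from the reports: it compares the expected payoff of honest reporting against gaming a wait-time bonus, given an audit probability and a penalty (all numbers hypothetical).

```python
# Toy model: does a wait-time bonus invite gaming? All numbers are hypothetical.

def expected_payoff(bonus, true_wait, target, audit_prob, penalty, game=False):
    """Expected payoff for a clinic manager under a wait-time bonus scheme."""
    reported = target - 1 if game else true_wait   # gaming always 'meets' the target
    payoff = bonus if reported <= target else 0.0
    if game and reported != true_wait:
        payoff -= audit_prob * penalty             # expected cost of being caught
    return payoff

# A $5,000 bonus, a 14-day target the clinic truly misses (40 days),
# a 10% audit rate and a $20,000 penalty if falsification is detected.
honest = expected_payoff(5000, true_wait=40, target=14, audit_prob=0.10, penalty=20000)
gamed = expected_payoff(5000, true_wait=40, target=14, audit_prob=0.10, penalty=20000, game=True)
print(f"honest: {honest:.0f}, gamed: {gamed:.0f}")
```

On these numbers, gaming yields 3,000 against 0 for honesty; the scheme is perverse unless audit rates or penalties rise considerably.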

As US commentators have noted of the VA, the system often puts the doctor in a conflict of interest between the patient and their paymaster, the government. Many countries have the same arrangements and should not, therefore, be complacent.

It is certainly timely and appropriate for policymakers, and those who think systematically about healthcare systems, to study carefully what happened at the VA and apply that learning to their own healthcare systems. I am sure there would be much to think about.

If anyone wants to do this, give me a call.

Conditions for good research

Just about every country has identified life sciences in some form or other as a priority for academic and commercial development. But what will characterise the countries that may in the end prevail?

  1. The research community needs a high degree of autonomy. The European University Association released an interesting study, University Autonomy in Europe II: the Scorecard, in 2011, assessing the degree of institutional autonomy that universities in the various EU member states enjoyed. The countries with the greatest university autonomy were from northern Europe: Denmark, Ireland, the UK, Finland, Sweden, Latvia, Lithuania. Those with highly regulated and state-controlled systems were from southern Europe, or had systems where the state just likes to intrude: France, Luxembourg, Greece, Italy and others. To be fair, some countries were more or less autonomous on different indicators, but the rough distinction can be drawn. Surprising, at least to me, was the middling performance of countries like the Netherlands, Austria and Germany. No doubt various higher-control states will endeavour to justify why the state needs to be so intrusive, but as evidence that this is perhaps an unhealthy state of affairs, we have seen the highly intrusive French state over the past year moving to create greater diversity and differentiation in funding for its universities, with greater autonomy (see this news item for instance). Clearly, greater autonomy necessitates greater diversity and differentiation, and in the end some universities will need to become better than others. While we would like to think that all universities are essentially the same, reality suggests that the only real equality lies in the extent to which they all meet minimum standards, rather than all trying to meet some arbitrary ‘gold standard’.
  2. The second point is that the bulk of significant research results in life sciences arise from centres known as academic health science centres (AHSCs). This is a theme I warm to, as it provides an organisational model that drives innovation from the clinical-user end, rather than from the research end. Yes, more research funds are always needed, but we also need solutions. Efforts to operationalise translational medicine are doomed to fail if the driving forces are not coupled to the clinical user; innovation policies in general need to start with problems needing solutions, where impact is more likely to be evidenced. Only a few countries have AHSCs: the US (over 50), Canada (about 14), Sweden (1), Belgium (1), the Netherlands (8) and the UK (5). Germany arguably has at least one, as does Italy. France has none, and one will need to see whether changes in its higher education system are likely to lead to formal establishment of this approach. The challenge (and this was the subject of a paper I presented, see the previous entry below) is that while universities are more likely to enjoy degrees of autonomy, hospitals are less likely to. The UK was only able to move toward establishing AHSCs when state control of hospitals was relaxed through successive periods of NHS reform. The Netherlands model built on existing relationships. Countries without AHSCs, though, will confront the twin challenge of institutional autonomy for both universities and hospitals.
  3. The third point is that not all countries will be able to do everything in life sciences and will therefore need to set some priorities. National priorities are hard to conceive because countries usually think of themselves as able to do everything, so efforts get diluted and underperform. Cash is tight these days (think debt) and governments simply cannot afford everything, so the most difficult challenge is establishing priorities.

“Triple Aim” for regulation

We are awash with regulation. Healthcare and medicines are particularly affected.

For instance, in their wisdom, European lawmakers have deemed it inappropriate for medicines to be advertised. And this in the 21st century, with open information access, calls for transparency, and the empowered and informed patient. Of course, the logic of such restrictions reflects real-world anxieties, but it also reflects the anxieties of another age. If we examine regulatory practices, we find that in the main they use instruments that would have been popular in the 1950s and 1960s. Today, more subtle and information-rich tools are available.

We regulate to coerce people and organisations to behave in ways that they would not, of their own volition, otherwise do. This coercion is legitimate if it arises through due process and democratic accountability, and not just the whim of the regulator or government. Sometimes this coercion has perverse consequences, as with medicines, where the legitimate manufacturer of a product is prohibited from publicising it, while all manner of snake-oil salesmen can make all manner of inappropriate claims: peach pits or peanut butter as sunscreen! The truth lies somewhere in between, but regulations make truth-telling more difficult, and not always in the public interest.

What I want to propose draws its inspiration from the US, where Don Berwick and colleagues suggested the Triple Aim as a tool for determining high-value intervention targets (the three aims being quality of care, patient satisfaction, and cost).

The Regulatory Triple Aim would comprise three tests, the simultaneous failure of which would indicate that the proposed regulation should not be considered further.

  1. Will the regulation produce poor quality or substandard outcomes? This is likely to be measured through evidence or insight into perverse consequences, weak enforcement, lack of suitable performance data, etc.
  2. Will the regulation produce dissatisfaction amongst the regulated? This is comparable to the patient-satisfaction aim, and goes to whether the regulation is appropriate and proportionately coercive, and whether it will enjoy high degrees of compliance.
  3. Are there avoidable costs associated with the regulation? This is an interesting test as it actually asks two things: [1] is there an incremental burden of costs associated with the regulation, and [2] is the cost proportionate to the benefits?

As a formula, we have: Value for Money = (Quality (#1) + Satisfaction (#2)) / Costs (#3).
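
To make the screening logic concrete, here is a minimal sketch in Python. The 0–10 scoring scale, the weights and the function names are my own assumptions, not part of the proposal.

```python
# A minimal sketch of the Regulatory Triple Aim as a screening tool.
# Scales and names are illustrative assumptions.

def value_for_money(quality, satisfaction, cost):
    """Value for money = (quality + satisfaction) / cost (assumed 0-10 scores)."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return (quality + satisfaction) / cost

def triple_failure(poor_outcomes: bool, dissatisfaction: bool, avoidable_costs: bool) -> bool:
    """A triple failure (YES to all three tests) screens the regulation out."""
    return poor_outcomes and dissatisfaction and avoidable_costs

# The olive-oil-pot example discussed below: poor outcomes, dissatisfaction
# amongst the regulated, and clearly avoidable costs -- it fails all three tests.
print(triple_failure(poor_outcomes=True, dissatisfaction=True, avoidable_costs=True))  # True
```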

We need ways to sharpen our focus on regulation, and we need to ensure that there is not too much of it for the value we seek to achieve.

Let’s test it on the regulation that was intended to control refillable olive oil containers or pots in restaurants, something more insightful minds eventually decided was a silly thing to do. (I’ll wager, though, that it has just gone into hibernation while a study is commissioned to find evidence that such refillable containers are full of fake olive oil or somesuch, and then it will re-emerge.) Still, it got a long way along the regulatory process without anyone challenging it (group think?). Are people really that dumb? I wonder how that happened; did no one apply a Wilson matrix to see whether the distribution of costs and benefits was properly understood? Anyway, back to the Triple Aim.

Olive Oil in Refillable Pots or Containers

Would regulating olive oil in refillable pots or containers …

1. … produce poor quality or substandard outcomes?

2. … produce dissatisfaction?

3. … create avoidable costs?

Triple failure, meaning answering YES to each question, would suggest this would not be a good idea.

Post your assessments and comments. Obviously, if you’ve got better examples (such as regulation of clinical trials or whatever), please feel free to expand the scope.

Could the medicine you don’t take properly kill you? It may depend on your age.

The last of my trilogy picks up on medicines again.

In the US, not taking medicines correctly is thought to be the fourth leading cause of death – could this be true?

WHO mortality data captures medicines-related deaths in a variety of categories. I ran the data on the categories concerned with medicines-related harm (ICD-10 codes: X40-X44, X60-X64, Y10-Y14, Y45-Y47, Y49-Y51, Y57). For the whole population it comes to less than 1% in every European country, but breaking it down by age cohort reveals interesting results: in different EU countries there are age-related variations in these causes of death, rising with age. None, however, emerges as a leading cause of death on its own.
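
For those who want to reproduce this, here is a sketch of the kind of query involved, assuming the WHO extract has been downloaded as a CSV. The file and column names (cause_icd10, age_group, deaths) are illustrative; the actual DMDB export uses its own layout.

```python
# Share of deaths attributable to medicines-related ICD-10 codes,
# by country and age group. Column and file names are assumptions.
import pandas as pd

MEDICINES_CODES = (
    [f"X{n}" for n in range(40, 45)]    # X40-X44: accidental poisoning by drugs
    + [f"X{n}" for n in range(60, 65)]  # X60-X64: intentional self-poisoning by drugs
    + [f"Y{n}" for n in range(10, 15)]  # Y10-Y14: poisoning, undetermined intent
    + ["Y45", "Y46", "Y47", "Y49", "Y50", "Y51", "Y57"]  # adverse effects in therapeutic use
)

df = pd.read_csv("who_mortality_extract.csv")          # hypothetical file name
medicines = df[df["cause_icd10"].str[:3].isin(MEDICINES_CODES)]

share = (
    medicines.groupby(["country", "age_group"])["deaths"].sum()
    / df.groupby(["country", "age_group"])["deaths"].sum()
)
print(share.sort_values(ascending=False).head(10))     # highest medicines-related shares
```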

However, medicines use sits within a system of patient care. Medicines misuse and medication errors may therefore create the conditions for a co-morbidity to assert itself. There is also the question of whether the drug was toxic for the patient at a particular dose (keep in mind that pills come in standard sizes and may need to be cut in half to approximate an accurate dose for a patient). Working through this data does highlight areas to pay attention to, in particular countries where there appears to be notably higher risk. I’d like to see better analysis of medication errors.

Once again, before we target the drugs bill as being out of control, let’s get a better understanding of the dynamics of medicines use itself. We may be spending money foolishly or carelessly. What are the incentives in health systems that may actually encourage this sort of professional conduct?

The devil is not just in the detail, but in the data and in clinical practices.

Want to know more?

WHO datasets are here: http://data.euro.who.int/dmdb/

The cost of medicines waste

In these days of trying to better understand the determinants of rising healthcare expenditure, it is productive to look in the waste bin and see what medicines are being thrown away.

Medicines waste is medicine dispensed to patients that they do not take. But we need to distinguish between actions taken by the patient and other factors, since not all wastage is patient non-adherence.

The costs include the cost of the medicine itself, but also the cost of changed procedures in pharmacies to reduce patient-related waste (procedural costs drive duplicate medicines ordering on hospital wards, for instance). There are also the costs associated with safe disposal of the medicines waste itself, and with how patients dispose of unwanted or unused medicines; environmental contamination by pharmaceuticals is of rising concern. [Pharmaceuticals in the Environment, European Environment Agency, 2010]

Considerable medicines waste arises because the patient has died, and the return rate correlates with condition: 100% for anaesthetic drugs, 60% for drugs used in immunosuppression/malignant disease, 26% for cardiovascular conditions, and 19% for drugs used for infections. This suggests that gross wastage data needs to be viewed with some care.

Reducing the stock held by patients in the home shifts the stocking costs to pharmacies. UK evidence suggests that “if all repeat prescriptions in 2008 had been issued at just 28 days, then total pharmacy costs would have been even higher – around £2.3 billion, or 28% of the net cost of medicines dispensed.” [Gilmour review on prescription charges, “Medicines Wastage” Prescription charges review: implementing exemption from prescription charges for people with long term conditions, May 2010] This suggests that included in wastage costs are pharmacy dispensing charges.

As in all cases of healthcare expenditure, the challenge involves a complex mix of activities and stakeholders. We need much better tracking of waste, if only to ensure we do not inappropriately target expenditure on medicines without first ensuring that the medicines being bought are properly used. Industry, healthcare and regulators can usefully work together here.

I haven’t dwelt on the environmental impact of flushing unused medicines down the toilet. I’ll let your imagination go to work on that one.

Want to know more?

Evaluation of the Scale, Causes and Costs of Waste Medicines, Final Report, York Health Economics Consortium/School of Pharmacy, London, 2010. This has a good international literature review of costs, but caution is needed in the context of the comments below.

Kümmerer K, Hempel M (eds), Green and Sustainable Pharmacy, Springer, 2010. See page 170 for a table of waste by country, though it is not costed.

Is health technology assessment morally defensible?


Increasingly widespread amongst the world’s healthcare systems is the assessment of medicines and devices using various types of cost-benefit or cost-utility analysis; this is called health technology assessment, or HTA. HTA seeks to determine, using evidence of one sort or another, whether something is, broadly speaking, affordable, weighing the cost of the medicine or device against the benefit to a particular constellation of diagnostic attributes in patients. This is usually quantified in a measure called a QALY, a quality-adjusted life year, which is a way to assess the value for money of a particular health technology. In short, it is a way of valuing lives.
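
For readers unfamiliar with the mechanics, here is a minimal sketch of the core calculation: the incremental cost-effectiveness ratio (ICER), in cost per QALY gained, compared against a willingness-to-pay threshold. All the figures are hypothetical; the £35,000 threshold is the one attributed to NICE later in this post.

```python
# Minimal HTA sketch: incremental cost per QALY gained vs. a threshold.
# All figures are hypothetical.

def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

THRESHOLD = 35_000  # per QALY; the figure attributed to NICE below

ratio = icer(cost_new=48_000, cost_old=6_000, qalys_new=4.2, qalys_old=2.8)
verdict = "approve" if ratio <= THRESHOLD else "reject"
print(f"ICER = £{ratio:,.0f} per QALY -> {verdict}")   # £30,000 -> approve
```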

HTA is a utilitarian approach to assessment. To some extent this is not surprising, as HTA is in the main a method developed by health economists who, like economists in general, hypothesise that we make daily decisions based on the utility of this or that, in terms of trade-offs (Pareto optimisation, for instance) and rational decision-making (that people seek to maximise value, or utility, in what they do). This approach is increasingly in dispute in light of findings from the neurosciences and behavioural economics: by positing that people do not always make decisions in their own best interests, these fields question a key assumption of traditional economics, that of the rational actor, always calculating trade-offs and maximising benefits.

The problem with utilitarianism, though, is that it doesn’t pay attention to the freedom of the individual; it justifies its results by the net benefit to society, regardless of the impact on the rights of individuals. Obviously, health economists don’t watch Star Trek, or they would know that the needs of the one outweigh the needs of the many. But then that, too, is a moral position.

Indeed, it is perhaps the sense that utilitarian conclusions don’t correlate with many people’s moral sentiments that explains why decisions of HTA agencies, for instance NICE in the UK (England), lead to moral outrage and a sense of, if not injustice, at least unfairness. While the results of an HTA process may lead to a quantitatively defensible conclusion, people sense that the conclusion is not morally defensible.

How are we to judge? Few would use utilitarian arguments this way in other spheres: would we calculate who needs welfare based on the net benefit to society in quality-of-life years? Though perhaps we do allocate welfare on moral assumptions that some people deserve it while others don’t.

Do we allocate support to communities ravaged by floods based on their overall contribution, or utility, to society? If you could donate £10 million to a university, would you pick Oxford University or Thames Valley University; which one is more worthy? And would you want to treat people this way?

HTA doesn’t even let us value lives in quite this way, since it neatly avoids deciding the worth of any particular type of person who just happens, through misfortune, to need some medicine that fails the HTA tests. HTA keeps us from confronting that fact: it is a way of drawing a conclusion without actually having to decide any allocation for any one person in particular. Bentham would approve.

There is, though, a technical problem with HTA, and it has to do with whether a utilitarian model can be used at one level of assessment outcome, where the decision to be made does not have life-threatening consequences for some people, but not at another, where it does.

If the QALY threshold is, say, £35,000, as it apparently is in the case of NICE, are the decisions below that threshold, which tend toward ‘yes’ or ‘approval’, morally different from decisions above it? I suggest that different moral criteria come into play above the threshold, and this is where I think our moral outrage should be directed and where HTA fails. Regretfully, HTA models treat the results as broadly continuous; that is, decisions above and below this threshold are seen as essentially of the same type. But I have argued elsewhere that above the threshold, HTA models fail for reasons other than their analytical soundness: above this threshold, the conclusions may lead to a lessened quality of life; in other words, they actually crystallise the health outcome rather than avoid it.

Therefore, in valuing lives, those above the threshold experience greater injustice than those below; they are treated differently, unfairly, unjustly, perhaps as less worthy, but certainly differently. Indeed, above the threshold we feel we are more in the realm of our moral sentiments about the value of human life, and less in those about the allocation of scarce resources.

If this were not so, then we would be living in a society that believes that the determinant of all important moral and political decisions is affordability; and if that were so, then we could not even afford the costs of inefficiency brought on by democracy, the inconvenience of not being able to exploit people, the costs of equal rights.

Perhaps, though, in our financially contaminated world, all we can think about today is money, and that is further contaminating our perception of what sort of society we are actually trying to foster. Certainly, protests on Wall Street and elsewhere point to the view that there seems to be some unjust allocation of the benefits of government bail-outs that just doesn’t benefit those ‘at the bottom’.

John Rawls wrote that we should distribute opportunity in a society in such a way as to ensure that the least well off benefit the most. In the context of HTA, medicines and technologies that benefit only a few, and at great cost, represent a cost worth having, as the least well off, namely those who need them most (they have the condition being treated and, in some societies, can least afford treatment), would benefit, even if a little; that is the price we pay for justice.

This, I suggest, is the root of our moral outrage at HTA: that it unjustly fails to serve those who need it most.

I am left wondering about the underlying morality of HTA as a government scheme. Governments, as we know, are the last resort when things are tough and should, one would hope, ensure that the least well-off in society are not penalised simply in virtue of being least well-off. In healthcare, someone has to be the carer of last resort; using HTA as a way of avoiding this responsibility is not morally defensible.

Healthcare statistical mumbo jumbo


The media have considerable trouble reporting health statistics, partly because these statistics often involve probabilities, estimates, and approximations. Phrases like “x times more likely” abound; without knowing the base likelihood, we have no idea whether this is a lot or a little. Small numbers can sound impressive, and people can easily be misled into thinking they might live forever. Take reporting that 42% of the population will die with or from cancer: the difference is important, since men frequently die with prostate cancer, but not from it.

What do you think this paragraph from The Guardian means (by the way, a search was unable to locate the document the article was based on; newspapers these days should cite the names of the documents, with links, to enable independent follow-up):

“Twenty-year-olds are three times more likely to reach their 100th birthdays than their grandparents and twice as likely as their parents, official figures show. A baby born this year is almost eight times more likely to reach 100 than one born 80 years ago, according to the figures issued by the Department for Work and Pensions.  A girl born this year has a one-in-three chance of reaching their 100th birthday, while boys have a one-in-four chance.”

Many people look to the media for information on health, but it doesn’t help when within a single paragraph (!) we are confronted with this rush of statistics.

They sound important, as though they ought to mean something. But what? Can these statistics be converted into something that might actually shed light on what the numbers mean, or is the newspaper just repeating statistics in the usual confusing way papers do? (Another example of where papers confuse when they report statistics: they’ll say something like “the number of mortgages issued declined by 1% last month; of that, 200 were remortgages”. Huh?)

Today’s grandparents were probably born around 1930, when life expectancy was about 60 years; today it is about 75, and for a twenty-year-old today it is estimated at 100, some 80 years from now. Life expectancy rose about 15 years between 1930 and today (a span of about 80 years) and will rise a further 25 years by 2090. Hmmm, that suggests the improvement in life expectancy is accelerating, as it will increase by roughly two-thirds more over the next 80 years than if it simply continued at a steady, linear pace.

Most people die before 100; certainly for this discussion we could say 99% of the population born in 1930 will be dead by 2030. So someone born in 1930 had a tiny chance of living to 100, and a baby born today is eight times more likely to make it, which still seems like quite a small number. We also know today’s baby is twice as likely to reach 100 as that person’s parents, born say in 1950, most of whom will also be dead by 2050.

Let’s be generous: if 1% of the population born in 1930 lives to 100, then 8% of the population born now will live to 100. Is that what they are saying? But the article also says that boys born today have a 25% chance of a 100th birthday, while girls have a 33% chance. Are they saying that of 100 boys, 25 ‘may live to 100’, and is that broadly equivalent to an eight-times improvement over their grandparents? Hmmm.
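
Here is a quick back-of-envelope consistency check (my own, not from the article): do ‘1 in 3 girls’, ‘1 in 4 boys’ and ‘eight times more likely than 80 years ago’ fit together?

```python
# Back-of-envelope check on the Guardian's figures. Assumes roughly equal
# numbers of boys and girls in the birth cohort.

p_girl, p_boy = 1 / 3, 1 / 4
p_today = (p_girl + p_boy) / 2   # a baby born today, averaged over the sexes
p_1930 = p_today / 8             # implied chance for a baby born ~80 years earlier

print(f"baby born today:  {p_today:.1%} chance of reaching 100")  # ~29.2%
print(f"implied for 1930: {p_1930:.1%}")                          # ~3.6%
```

On these figures, roughly 3.6% of the 1930 cohort would have to reach 100, several times my generous 1% guess above, so the quoted statistics only hang together if rather more of the 1930 cohort reaches 100 than we tend to assume.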

So how many boys born today will live to 100? And how many girls? Answers need to take account of the probabilities, so we also need to know whether the various statistics in the quote above are compatible with each other or inconsistent. Do you think an average person would understand the article? (By the way, we know that doctors often misunderstand what statistics like this mean when referring to the likelihood that people may or may not acquire a particular disease or condition; if that is true, what are the chances for the rest of us: 1 in 50?)

Post your answers.

QED, I think.

Crispy critters: Public Accounts Committee roast the NHS IT project

I was watching the Public Accounts Committee on 23 May 2011 take evidence from IT suppliers and NHS executives on the NHS IT contracts. This monstrous contract was doomed from the start, yet few seemed to be in a position of influence to alter the ‘group think’ that prevailed in government; civil servants and ministers seemed to breathe each other’s air as they pursued this pig in a poke. Worryingly, the PAC exchanges shed a bit of light, but more revealing was the lack of a common language amongst those concerned. Frequently, answers were not relevant to the question, used jargon, or introduced further obfuscation.

In the end, whether supplier or NHS executive, the PAC was faced with a sea of denial, avoidance, or sheer hubris. I say hubris as NHS executives in particular were at pains to avoid rocking their own boat by being completely candid, preferring warm phrases that all was well, even as the CEO of the NHS was unable to answer many questions clearly and seemed painfully ill-informed of his brief.

Evidence of obfuscation abounded, as the MPs had to ask suppliers many times to answer yes or no to what were straightforward questions. I was impressed with the efforts of some MPs (Bacon in particular) to get clear answers to important questions. As a rule, complex answers betray a lack of understanding of the underlying logic; there are simple answers to these questions, not ‘it depends’ or ‘you’re comparing apples and oranges’ (or pears; indeed, at one point the sessions seemed more about the comparative merits of different fruits than about IT procurement). The lack of clarity about the underlying logic is also evidence that people were unable to agree on what the core problems were. Now, granted, for some this is likely to be a complex problem (in the technical sense of the word, a wicked problem), but I doubt that: the NHS’s needs and responsibilities are complex, but an electronic health record is a thing, with a defined functionality.

I remember sitting in a room just as this NHS IT-for-health programme was being firmed up (2002) and hearing the Director (Granger) at the time speak glowingly of the benefits. Upon hearing this, others on the international teleconference asked, “surely you’re not serious about doing this”, only to be told, “absolutely”. As is said, act in haste, repent at leisure.

An important question was: knowing what we know today, was the original decision to proceed with this central and top-down approach sensible? The answers were evasive and broadly technically wrong. In 2002 it was perfectly possible to develop distributed systems, with broadly distributed functionality, using various systems-integration options to enable diverse technical architectures to co-exist and deliver a uniform service. No one wanted to think that way, for a couple of reasons. The first is ego: grand plans appeal to people’s ego needs, to be in charge of something big; the Director at the time exhibited seriously Machiavellian behaviours and failed miserably to engage users. The second is conceptual: at the time, Department of Health and NHS executives were still thinking of the NHS as a single lumpen thing that needed single solutions to its complex problems. By the late 1990s and early 2000s it was understood that the NHS should be seen as a complex adaptive system, but this was not acknowledged, as it flew in the face of the prevailing ideology of central control, driven by the mistaken (technical) belief that a distributed system, while diverse and pluralistic, would be unable to deliver a common standard of performance.

In the end, you get a system that is rigid and technically obsolete as soon as it starts operating, because it fails to evolve with changing clinical needs, which will change as clinicians become familiar with the technology, grow comfortable with its use, and start to specify more sophisticated applications. That some PAC evidence said clinician need had evolved is nonsense; we knew then what the core needs were. Anyway, we’re moving on to smartphone apps, and there is little evidence that the system can accommodate the wireless world of healthcare. The best-selling clinical app is ePocrates, for drug information. How many clinicians have that app? How many clinicians are using smartphones? Distributed and simple systems can deliver often quite complex solutions; for instance, the Danish electronic prescribing system was built on simple secure emails.

The approach that was ignored at the time was this:

  1. specify common standards of interconnectivity and functionality, that is, results (a minimal sketch of such a conformance test follows below);
  2. allow providers to use whatever system they wished as long as it met these requirements;
  3. allow the system to evolve over time as needs become better understood;
  4. start with the patients who are heavy users (high risk/high utilisation) and roll out from there.

That’s it.
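
To make points 1 and 2 concrete, here is a minimal sketch of what a conformance contract might look like: the centre publishes required capabilities and an agreed message type, and any supplier system that passes may connect, whatever its internal architecture. All the names here are illustrative, not from any actual NHS specification.

```python
# A toy conformance test: publish requirements, admit anything that meets them.
import json
from dataclasses import dataclass, asdict

REQUIRED_CAPABILITIES = {"patient_lookup", "document_push", "document_query"}

@dataclass
class DischargeSummary:                    # one agreed message type
    nhs_number: str
    author_org: str
    summary_text: str

class VendorSystem:                        # stand-in for any supplier's product
    def capabilities(self):
        return {"patient_lookup", "document_push", "document_query", "scheduling"}
    def serialise(self, msg):
        return json.dumps(asdict(msg))     # wire format is the vendor's choice
    def parse(self, wire):
        return DischargeSummary(**json.loads(wire))

def conforms(system) -> bool:
    """Admit any implementation that meets the published requirements."""
    ok_caps = REQUIRED_CAPABILITIES <= set(system.capabilities())
    msg = DischargeSummary("9434765919", "RJ1", "test")
    ok_round_trip = system.parse(system.serialise(msg)) == msg
    return ok_caps and ok_round_trip

print(conforms(VendorSystem()))            # True: this vendor may connect
```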

Where the English NHS and the Department also lost the plot was in failing to exploit the NHS IT project to drive innovation into the IT sector and encourage the formation of a potentially world-class health IT industry in the UK. Is it any coincidence that the main solutions came from outside the UK and the critical supplier expertise betrayed North American origins?

This is a real shame, as once again the Department has shown antipathy toward enabling a commercially successful and innovative health supplier industry, in favour of mean-spirited control. This was perhaps the greatest missed opportunity: instead, the Department came up with the false logic of needing suppliers of scale (who are now quasi-monopolists). Indeed, one member of the PAC did question whether CSC’s corporate logic was to make itself a monopoly supplier to the NHS.

The tragedy, too, is that virtually all the functionality that the NHS needs can be downloaded for free in the form of open source software.

Finally, the best thing the NHS and the Department could do is make sure all that intellectual property that has accumulated is given away, to try again to jump-start a health IT industry. If there is a value-for-money lesson the PAC could draw it is to determine whether there is sufficient residual value in the NHS IT procurement to be translated into investment in the economy, to build new suppliers to the NHS and perhaps the world. An opportunity awaits.

UPDATE

I thought I’d add a reference to this diagram on distributed clinical systems. The copyright dates from 2002, a time when the PAC was told such capability didn’t exist. The diagram is taken from the openEHR website, which adds: “Much of the current openEHR thinking on distributed computing environments in health is based on the excellent previous works of the (then) OMG Corbamed taskforce, and the Distributed Healthcare Environment (DHE) work done in Europe in EU-funded projects such as RICHE and EDITH, and the HANSA and PICNIC implementation projects.” In those days, the UK’s NHS was still charting a proprietary, and non-standard, approach to EHRs and clinical systems; an example of one failed programme is the ‘common basic specification’. There is an interesting commentary here on some reasons why it failed.

Diagram of a distributed clinical system, ca 2002

An Auditor of One


The UK coalition government’s reform agenda continues to unfold with the planned scrapping of the Audit Commission. While the Commission had good analytical capacity and did focus on issues of importance, shifting the audit function further into systems and out into the community was never one of its core objectives.

In healthcare, I have written and spoken of the patient as the “auditor of one”, as the patient is the only person who actually experiences the continuum of care, and it is only through the patient that the integration, or not, of services is achieved. While bureaucratic processes may try to knit systems together at their edges, only users have that ‘joined up’ experience, and it is by engaging with them more effectively that radical service improvement will come about (the user is really the most disruptive force for quality improvement we have).

The next test for audit in the UK will be ensuring that all these auditors of one can be effective. Rather unfortunately, the government is referring to them as “armchair auditors”, a term which suggests distant interest rather than engaged, critical appraisal of performance. But organised interest groups can emerge, or existing ones can expand their scope of interest, to increase the salience of issues in the delivery of publicly funded services.

I think one auditor is really enough anyway, but the National Audit Office will need to expand its remit in at least two areas if it is to be really worthy of public expectations, to include:

  1. value-for-money retrospective audits (and not just of assessing implementation against legislative intent);
  2. prospective audits of planned legislation (similar to the US non-partisan Congressional Budget Office).

I might add a third, namely being advised by, and engaging with, the public, perhaps through regional citizen audit advisory groups who can act to bring local concerns together where national concerns, at least, are an issue. There are models for this sort of relationship which would enhance accountability, transparency and visibility with the public.

Digital Risk: should health IT systems carry a health warning?

Arizona, poisonous snake warning sign.

Beware digital errors as they can bite

We all know that accidents (‘unusual occurrences’, in healthcare-speak) can happen. Where systems are involved, errors can arise from how a system works: the way the various bits mesh, and the knowledge and training of everyone involved working together. It is no real surprise that some errors arise from the technologies we use. In particular, health information technology systems can cause new types of errors and mistakes, beyond just not working properly.

In the US, the Health IT Policy Committee has proposed establishing a database to track potential safety risks related to IT systems.  These risks include:

  • hardware and software failure and bugs
  • workflow interactions between staff and users
  • interoperability problems
  • implementation and training deficits.

Since healthcare work is complex, the workflow risks are particularly complex and can arise, for instance, from inaccurately understanding how a manual system achieves its results, and thereby designing a software-based system that fails to do the same. There is a funny little thing that happens when a patient sees a doctor: the doctor often uses writing a prescription to terminate the patient encounter. Tearing the slip off the pad, a swirl of signature and handing it to the patient leads to the patient leaving, a neat way to end the consultation.

In an automated system (electronic prescribing, for instance), the consultation is not terminated in this behavioural manner; instead, the doctor essentially hits the return key to enter the required prescription data into the system, perhaps handing the patient a copy (or not), while the prescription flies off on electronic wings to the pharmacy for dispensing. An error can occur if the doctor does not hit the return key between patients: the prescription list builds up from patient to patient until the return key does get hit (unless some sort of failsafe has been built in). This error actually happened, and it was only when an alert pharmacist commented to the patient that the doctor had added a lot of new drugs that the alarm was raised. Perhaps the patient should have been more distrustful, too.
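
Here is a sketch of the kind of failsafe just described; it is my own illustration, not the actual system’s design. The idea is simply to tie the pending prescription queue to the open encounter, so nothing can carry over silently from one patient to the next.

```python
# Toy failsafe: pending prescriptions are bound to the open encounter and
# cannot silently carry over to the next patient. Illustrative only.

class PrescribingSession:
    def __init__(self):
        self.patient_id = None
        self.pending = []                  # prescriptions awaiting transmission

    def open_encounter(self, patient_id):
        if self.pending:
            # Failsafe: refuse to switch patients while unsent items are queued.
            raise RuntimeError(
                f"{len(self.pending)} unsent prescription(s) for {self.patient_id}")
        self.patient_id = patient_id

    def add_prescription(self, drug, dose):
        self.pending.append((drug, dose))

    def transmit(self):                    # the 'return key' step
        sent, self.pending = self.pending, []
        print(f"sent {len(sent)} item(s) for patient {self.patient_id}")

s = PrescribingSession()
s.open_encounter("patient-A")
s.add_prescription("amoxicillin", "500 mg")
s.transmit()
s.open_encounter("patient-B")              # fine: the queue was flushed
s.add_prescription("ramipril", "2.5 mg")
# s.open_encounter("patient-C")            # would raise: unsent item still queued
```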

We must be mindful of risk and error in any kind of technology, but particularly in systems where it is very hard to look inside the black box of software code.

I wrote a paper on digital risk some years ago, which can be found here: Patient Safety and Digital Risk. I have also raised the issue of risk in the even blacker box of predictive algorithms used to data-mine record systems and profile patient risk, which can be found here: Predictive Health. This second paper suggested that such software may need to be subjected to regulatory review comparable to that for a medical device.

Just because you can’t drop it on your foot, doesn’t mean something can’t be dangerous.