Digital Risk: should health IT systems carry a health warning?

[Image: poisonous snake warning sign, Arizona]

Beware digital errors: they can bite.

We all know accidents (unusual occurrences in healthcare) can happen. Where systems are involved, errors can arise from how a system works, from the way its various parts mesh, and from the knowledge and training of everyone involved. It is no real surprise that some errors arise from the technologies we use. In particular, health information technology systems can cause new types of errors and mistakes, beyond simply not working properly.

In the US, the Health IT Policy Committee has proposed establishing a database to track potential safety risks related to IT systems.  These risks include:

  • hardware and software failure and bugs
  • workflow interactions between staff and users
  • interoperability problems
  • implementation and training deficits.
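As a rough illustration only (the committee's actual data model, if it has one, is not described in this post), a report in such a tracking database might be modelled along the lines of the Python sketch below; the category names simply mirror the list above, and all field names and the example entry are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskCategory(Enum):
    """Categories mirroring the committee's proposed list above."""
    HARDWARE_SOFTWARE_FAILURE = "hardware/software failure or bug"
    WORKFLOW_INTERACTION = "workflow interaction between staff and users"
    INTEROPERABILITY = "interoperability problem"
    IMPLEMENTATION_TRAINING = "implementation or training deficit"


@dataclass
class RiskReport:
    """One tracked safety risk tied to a health IT system."""
    system_name: str
    category: RiskCategory
    description: str
    reported_on: date
    patient_harm: bool = False
    tags: list[str] = field(default_factory=list)


# Hypothetical entry for the e-prescribing error described below.
report = RiskReport(
    system_name="e-prescribing module",
    category=RiskCategory.WORKFLOW_INTERACTION,
    description="Unsent prescriptions accumulate across patient encounters",
    reported_on=date(2012, 1, 1),  # illustrative date only
    patient_harm=False,
)
```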

Because healthcare work is complex, workflow risks are particularly subtle. They can arise, for instance, from misunderstanding how a manual process achieves its results and then designing a software-based system that fails to replicate it. There is a funny little thing that happens when a patient sees a doctor: the doctor often uses writing a prescription to end the encounter. Tearing the sheet off the pad, a swirl of signature, and handing the slip to the patient prompts the patient to leave, a neat way to close the consultation.

In an automated system (electronic prescribing, for instance), the consultation does not end with this bit of behaviour. Instead, the doctor essentially hits the return key to enter the prescription data into the system, perhaps handing the patient a printed copy (or not), while the Rx flies off on electronic wings to the pharmacy for dispensing. A new error becomes possible: if the doctor does not hit the return key between patients, the Rx list builds up from patient to patient until the key is finally pressed (unless some sort of failsafe has been built in). This error actually happened, and the alarm was raised only when an alert pharmacist commented to the patient that the doctor had added a lot of new drugs. Perhaps the patient should have been more distrustful, too.
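A minimal sketch of one such failsafe, in Python: the session refuses to switch patients while unsent prescriptions remain queued. The class and method names are invented for illustration and do not come from any real e-prescribing product; the design assumption is simply a per-patient pending queue that must be emptied before moving on.

```python
class PrescribingSession:
    """Hypothetical e-prescribing session illustrating a per-patient failsafe."""

    def __init__(self):
        self.current_patient = None
        self.pending_rx = []  # prescriptions entered but not yet sent

    def open_patient(self, patient_id):
        # Failsafe: refuse to move on while the previous patient's
        # prescriptions are still sitting in the queue.
        if self.pending_rx:
            raise RuntimeError(
                f"{len(self.pending_rx)} unsent prescription(s) for "
                f"{self.current_patient}: send or discard before switching."
            )
        self.current_patient = patient_id

    def add_prescription(self, drug, dose):
        self.pending_rx.append((drug, dose))

    def send(self):
        # The 'return key' step: transmit and clear the queue so nothing
        # carries over to the next patient.
        for drug, dose in self.pending_rx:
            print(f"Sending to pharmacy: {self.current_patient}: {drug} {dose}")
        self.pending_rx.clear()


# Without the failsafe, patient B would silently inherit patient A's drugs.
session = PrescribingSession()
session.open_patient("patient-A")
session.add_prescription("amoxicillin", "500 mg")
session.send()                     # the doctor 'hits return'
session.open_patient("patient-B")  # allowed: the queue is empty
session.add_prescription("lisinopril", "10 mg")
# session.open_patient("patient-C")  # would raise: unsent Rx for patient-B
```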

We must be mindful of risk and error in any kind of technology, but particularly in systems where it is very hard to look inside the black box of software code.

I wrote a paper on digital risk some years ago, which can be found here: Patient Safety and Digital Risk. I have also raised the issue of risk in the even blacker box of predictive algorithms used to mine record systems and profile patient risk, which can be found here: Predictive Health. That second paper suggested that such software may need regulatory review comparable to that applied to a medical device.

Just because you can’t drop something on your foot doesn’t mean it can’t be dangerous.