Over the past few years, largely because of the Affordable Care Act, the adoption of Electronic Health Record (EHR) systems in the USA has grown dramatically. On the back of that rapid climb, EHR vendors have been trousering some pretty large amounts of revenue, billions of dollars, in fact. This is not a bad thing per se, but as Congress suddenly realized this year, all that cash didn't translate into the giant leaps in innovation it predicted. Some of this is the result of a captive market, some is down to psychosocial artifacts of clinicians, and some to the fact that captive markets aren't necessarily innovative.
One of the places where the lack of innovation, or even basic maturity, shows is the degree to which clinicians have to type the same data over and over into different electronic forms. Not only do EHR systems not interoperate very well between vendors, some don't even interoperate with themselves! So it is a common sight to see a nurse type in records from a sheet of paper, then, if they are lucky, copy and paste them into another form. If they are unlucky, they get to retype the same data multiple times into different EHR screens. If they are doubly unlucky, the system is also somewhat fragile, which isn't unusual, and it aborts the session before the data is saved. In that case, they get to retype it all again when the system comes back to life. Sometimes this happens several times a day – in one case that I encountered, the clinician had to try fourteen times before the system recorded the data!
This is obviously a pretty abominable situation, and introducing even the most basic degree of workflow automation is going to take a lot of effort and money. Luckily, the EHR vendors are flush and positively glowing pink with all that Meaningful Use cash in their fists.
What I want to see isn't beyond current technology or in the realm of science fiction, and it isn't even where we ultimately want to end up, but it shows where the thinking needs to head (in my opinion, that is).
What I want to see is the removal of the human from any data capture that doesn’t actually require their expertise.
Not really a big ask, given that we can put intelligence in spectacles and the average smartphone has more brains than it knows what to do with.
So let’s say a patient arrives for a consultation.
When they enter the waiting room, I want them to get a transponder sticker. These are dirt cheap, pretty reliable, and can be scanned without physical contact. At the reception desk, the clerk reads the sticker and associates it with the patient record. Now I can tally who left without being registered (elopement), how long registration took (primary wait time), and at which stage of the encounter each patient currently is (census).
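The bookkeeping behind those tallies is simple once every doorway and desk scan lands in an event log. Here is a minimal sketch in Python; the `ScanEvent` structure, the location names, and both helper functions are invented for illustration, not any real reader's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical scan event emitted by a reader at each doorway or desk.
@dataclass
class ScanEvent:
    tag_id: str          # transponder sticker ID
    location: str        # e.g. "waiting_room", "reception"
    timestamp: datetime

def primary_wait(events: list, tag_id: str):
    """Time from entering the waiting room to registration at reception."""
    entered = next((e.timestamp for e in events
                    if e.tag_id == tag_id and e.location == "waiting_room"), None)
    registered = next((e.timestamp for e in events
                       if e.tag_id == tag_id and e.location == "reception"), None)
    if entered and registered:
        return registered - entered
    return None

def elopements(events: list) -> set:
    """Tags seen entering the waiting room but never registered at reception."""
    entered = {e.tag_id for e in events if e.location == "waiting_room"}
    registered = {e.tag_id for e in events if e.location == "reception"}
    return entered - registered
```

The census is just the latest scan location per tag; everything else falls out of the same log.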
When the patient is called, their transponder is read as they leave the waiting room, and again as they enter the examination room. When the nurse or nurse practitioner scans their own ID on the workstation, the patient record is already onscreen in the room. Each vital sign collected goes directly into the patient record because the instruments are vaguely intelligent. Blood pressure, pulse oximetry, weight, height, respirations, temperature, etc. all flow from the device to the EHR simply by using them on the patient. Each reading is time-stamped, carries the ID of whoever was using the instrument and the ID of the device itself, and is shown as a machine entry in the patient record.
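It is worth spelling out what such a "machine entry" might look like as a record. This is a hypothetical sketch, not any vendor's schema – the field names and the `capture` helper are made up – but it shows that the provenance the paragraph describes (who, which device, when, flagged as machine-entered) is just a handful of fields:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical structure for a device-originated observation.
@dataclass(frozen=True)
class MachineEntry:
    patient_id: str      # resolved from the transponder scan
    clinician_id: str    # whoever was using the instrument
    device_id: str       # the instrument itself
    observation: str     # e.g. "blood_pressure"
    value: str           # e.g. "120/80 mmHg"
    recorded_at: datetime
    source: str = "machine"   # flagged as a machine entry, not a typed one

def capture(device_id, clinician_id, patient_id, observation, value):
    """What the instrument would send to the EHR the moment it is used."""
    return MachineEntry(patient_id, clinician_id, device_id,
                        observation, value, datetime.now(timezone.utc))
```

Because the entry is frozen and stamped at capture time, there is nothing for a human to transcribe and therefore nothing to mistype.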
Verbal notes can already be captured through speech recognition, but let's say that the nurse actually has to enter these themselves. They don't have to search for the patient record or the screen – those are already there – and they simply need to verify that the patient record is correct. (Although unless the patient swapped armbands with somebody, we are pretty sure who they are.)
When the process has reached a certain point, the EHR can buzz the physician that the patient is close to ready. There is no long wait while the nurse writes things down or types, and no need for anyone to go find the physician.
A similar scenario unfolds when the physician enters: the room, patient, and physician are associated in an entry event because all three have transponder identities. Relevant patient data is already displayed when the physician scans their ID at the workstation to log in, and again, any use of instruments captures data. Listening to the patient's lungs with an intelligent stethoscope can capture the sounds, timestamp them, and put them into the correct place in the patient's record. Even more wonderful, if the patient has any electronic records pertinent to the encounter, these can be transmitted from a smartphone Personal Health Record (PHR) app.
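The entry event itself is nothing more than the three transponder identities tied together with a timestamp. A sketch, with the event name and keys invented for illustration:

```python
from datetime import datetime, timezone

def associate(room_id: str, patient_tag: str, clinician_tag: str) -> dict:
    """Hypothetical encounter-entry event linking room, patient, and clinician."""
    return {
        "event": "encounter_entry",
        "room": room_id,
        "patient": patient_tag,
        "clinician": clinician_tag,
        "at": datetime.now(timezone.utc).isoformat(),
    }
```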
The only part the physician plays in capturing data is when expertise is required or when the machines can't (yet) do it themselves. There is no reason on earth why a scale, blood pressure cuff, or pulse-oximetry device can't transfer its data to the EHR itself. Only the most antiquated of medical offices lack devices that display the data digitally; it's just that we then typically ask a human to write it down or type it into the EHR manually. That is a bad use of resources, and it opens up opportunities to get things wrong.
With time-stamped machine data, the practice can start monitoring movement and wait times, adjust its workflow to optimize patient flow, and reduce unnecessary steps and waits. Staffing rosters and equipment placement can be evidence-based rather than guesswork, and bottlenecks in the process become far more visible.
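Once those timestamped events exist, the analytics are straightforward. The sketch below (hypothetical feed format, invented stage names) derives the average minutes spent in each stage by treating each successive scan as a stage transition – exactly the kind of evidence a bottleneck hunt needs:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Each record: (patient_id, stage, entered_at) - a hypothetical feed of the
# timestamped scan events described above.
def stage_durations(transitions):
    """Average minutes spent in each stage across all patient journeys."""
    by_patient = defaultdict(list)
    for pid, stage, at in transitions:
        by_patient[pid].append((at, stage))
    durations = defaultdict(list)
    for visits in by_patient.values():
        visits.sort()  # chronological order per patient
        # Time in a stage = gap between entering it and entering the next one.
        for (t0, stage), (t1, _) in zip(visits, visits[1:]):
            durations[stage].append((t1 - t0).total_seconds() / 60)
    return {stage: mean(mins) for stage, mins in durations.items()}
```

Feed it a day's worth of scans and the slowest stage falls straight out of the result.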
The basic principle is borrowed from industrial engineering – don't ask a human to do something a machine can do. Free up clinician time, reduce transcription errors, and let clinicians focus on where their expertise lies – not on being low-level data-capture clerks.
We should be demanding that equipment manufacturers and EHR vendors get their act together, and stop making clinicians do their dirty work.
That’s my story, and I’m sticking to it!