BLOGSCAN - By Feb. all ER physicians at DePaul Health Center (MO) will be using scribes

In my Sept. 2010 post "The Ultimate Workaround To Mission Hostile Health IT: Humans (a.k.a. Scribes)," I wrote:

The EMR is a technology that was supposed to improve clinical medicine (revolutionize it, some say). It was supposed to facilitate clinical medicine. It was not supposed to slow physicians and others down to the point of impairing their ability to practice medicine. However, the rosy predictions are not proving to be the case. Instead, we have the ultimate workaround to the health IT mission hostile user experience: [the medical scribe].

On the HIStalk blog today, one medical center apparently agrees:

By February, all ER physicians at DePaul Health Center (MO) will be using scribes for electronic medical documentation. Administrators hope to improve staff productivity as well as patient satisfaction. Apparently patients were “annoyed” that doctors were sharing their attention with a computer.

In the cited article "Scribes are finding their place in emergency rooms" by Michele Munz, St. Louis Post-Dispatch, Jan. 25, 2011:

"[With a scribe] I can just completely focus on and deal with the patient," [ED physician] Lebo said.

The emergency department at DePaul Health Center in Bridgeton, which sees about 60,000 patients a year, is the first to use scribes in the St. Louis area. By February, all the hospital's emergency physicians will have a scribe tapping away at a laptop or tablet computer while they work. Across the nation, about 200 hospital emergency departments have started using scribes, most within the last two years, according to the three major companies providing scribes.


Several points worthy of note:

  • The cost of scribes will affect the supposed ROI of EHRs; scribes are necessitated by the fact that the EHR, with its mission hostile nature, gets in the way of physicians. (See Dr. Doug Perednia's analysis at this link.)
  • You should not have to work around something that is not in the way.
  • That a technology touted as extremely beneficial is now seen to need an "isolation layer" between the end user and the intended beneficiaries of the system is another example of how health IT remains an experiment.

-- SS

Orderless in Seattle: Software "glitch" shuts down Swedish Medical Center's medical-records system

A commenter noted yesterday that, after 25 years in practice, they had lost just one [paper] chart (as opposed to an IT system crash, where every chart is lost temporarily).

As coincidence would have it, there's this story in the news:

Software glitch shuts down Swedish medical-records system
Tuesday, January 25, 2011
By Carol M. Ostrom
Seattle Times health reporter


A four-hour shutdown of Swedish Medical Center's centralized electronic medical-records system Monday morning was caused by a glitch in another company's software, said Swedish chief information officer Janice Newell.


There's that word "glitch" again that I see so frequently in the health IT sector when a system suffers a major crash that could harm patients. Why do we not call it a "glitch" when a doctor amputates the wrong body part, or kills someone?


The system, made by Epic Systems, a Wisconsin-based electronic medical-records vendor, turned itself off because it noticed an error in the add-on software, Newell said, and Swedish was forced to go to its highest level of backup operation.

Turned itself off? Back we go to the old Unix adage that "either you're in control of your information system, or it's in control of you."

To prove that point, note that "the highest level of backup operation" had a bit of a problem:


That allowed medical providers to see patient records but not to add or change information, such as medication orders.

I'm sure sick and unstable patients, such as those in the ICUs, as well as their physicians and nurses, appreciated this minor "glitch." Look, Ma, no orders!

(Do events like this ever happen in the middle of a Joint Commission inspection?)

The "glitch" didn't just affect a few charts:


The outage affected all of Swedish's campuses, including First Hill, Cherry Hill, Ballard and its Issaquah emergency facility, as well as Swedish's clinics and affiliated groups such as the Polyclinic.

I cannot imagine a paper-based "glitch" that could affect so many, so suddenly, other than a wide-scale catastrophe.


During the outage, new information was put on paper records [that 5,000 year old, obsolete papyrus-based technology that's simply ruining healthcare, according to the IT pundits - ed.] and transferred into patient records in the Epic system after the system went back up in the afternoon. [By whom? Busy doctors? - ed.] Epic, Newell said, is "really good at fail-safe activity," and if it detects something awry that could corrupt data, it shuts itself off, which it did Monday at about 10 a.m.


This means that interfaced systems need to undergo the highest levels of scrutiny in real-world use if they can, in effect, shut down an entire enterprise clinical system.
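The behavior Newell describes, a system that halts itself rather than store possibly corrupted data, is the classic fail-stop pattern. A minimal sketch of the idea (hypothetical code for illustration only; Epic's actual mechanism is not public):

```python
# Hypothetical sketch of a fail-stop guard on an interface feed:
# if incoming data fails validation, refuse further writes rather
# than risk corrupting the clinical record.

class FailStop(Exception):
    """Raised when the system must halt writes to protect data integrity."""
    pass

def ingest(record, validate):
    """Apply an interfaced message to the chart, or halt the system."""
    if not validate(record):
        # Fail-stop: enter read-only mode instead of storing bad data.
        raise FailStop("interface data failed validation; writes suspended")
    return record

# A well-formed message is accepted...
valid = ingest({"med": "heparin", "dose_units": "units/hr"},
               lambda r: "dose_units" in r)

# ...a malformed one from the "other vendor" halts the whole pipeline.
try:
    ingest({"med": "heparin"}, lambda r: "dose_units" in r)
except FailStop as e:
    print("fallback to read-only:", e)
```

The design tradeoff is exactly what Swedish experienced: failing stopped protects data integrity, but it pushes an entire enterprise of clinicians onto a degraded, read-only fallback.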

I note that the "other company's software" that brought the whole system to a grinding halt was not identified, nor was the nature of the "other vendor's" software "glitch" itself. Was the problem truly caused by another vendor via a bug in their product, via a faulty upgrade, or via an internal staff error related to the other vendor's software?

It seems we now have yet another defense for HIT "glitches" other than "blame the users": it's not OUR fault; blame the other vendors.


Newell said the shutdown likely affected about 600 providers, 2,500 staffers and perhaps up to 2,000 patients, but no safety problems were reported.


As I've noted at this blog before, it is peculiar how such "glitches" never seem to produce safety problems, or even acknowledgments of increased risk.


Staff members were notified of the shutdown via error messages, e-mails, intranet, a hospital overhead paging system and personal pagers.


"Warning! Warning! EHR and CPOE down! Grab your pencils!" Just what busy doctors and nurses want to hear when they arrive for a harrowing day of patient care.

I wonder if the alert was expressed in a manner not understandable to patients, e.g., "Code 1100011" (99 in binary!) or something similar, as is done for medical emergencies.
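For the curious, the binary aside checks out; in Python:

```python
# 1100011 in base 2 is indeed 99 in base 10 (64 + 32 + 2 + 1).
code = 0b1100011
print(code)             # 99
print(format(99, "b"))  # 1100011
```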


Newell said she was "99.9 percent sure" other hospitals have had similar shutdowns [that's certainly reassuring about health IT - ed.], because software, hardware and even power systems are not perfect. [That's why we have resilience engineering, redundancy, etc. - ed.]

"Anybody who hasn't had this happen has not been up on an electronic medical record very long," Newell said. "I would bet a year's pay on that."


A logical fallacy to justify some action or situation can take the form of an appeal to common practice. Is what I am seeing here what might be called an appeal to common malpractice?

Or is the fallacy simply a manifestation of the adage that "misery loves company"?


Newell said this is not the first shutdown of Epic, which was fully installed in Swedish facilities in 2009 after a nearly two-year process. But it was the longest-running one, she acknowledged.

Gevalt.


Swedish is exploring creating "more sophisticated levels of backup" with other hospitals, Newell said, locating a giant server in a different geographic area to protect against various disasters such as earthquakes or floods.


Maybe they should have done that after the aforementioned other "glitches."

I repeat the adage:

"Either you're in control of your information system, or it's in control of you."

Indeed, if the information system is mission-critical, and you cannot control it, you literally have no business disrupting clinicians en masse and putting patients at risk by letting it control you.

Finally, on the topic of 'cybernetic extremophiles', I note that we have several Mars Rovers and very distant space probes such as Voyager 1 whose onboard computers (in the case of Voyager, built long ago with much less advanced technology than today's IT) have been working flawlessly in environments far more hostile than a hospital data center, and long beyond their stated life expectancies.

The Voyager 1 spacecraft is a 722-kilogram (1,592 lb) robotic space probe launched by NASA on September 5, 1977 to study the outer Solar System and eventually interstellar space. Operating for 33 years, 4 months, and 22 days, the spacecraft receives routine commands and transmits data back to the Deep Space Network. Currently in extended mission, the spacecraft is tasked with locating and studying the boundaries of the Solar System, including the Kuiper belt, the heliosphere and interstellar space. The primary mission ended November 20, 1980, after encountering the Jovian system in 1979 and the Saturnian system in 1980.[2] It was the first probe to provide detailed images of the two largest planets and their moons.

As of January 23, 2011, Voyager 1 was about 115.963 AU (17.242 billion km, or 10.8 billion miles) or about 0.00183 of a light-year from the Sun. Radio signals traveling at the speed of light between Voyager 1 and Earth take more than 16 hours to cross the distance between the two.
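A quick back-of-the-envelope check of the quoted figures (the excerpt's kilometer figure differs slightly from the straight AU conversion, presumably due to rounding or the Earth-Sun baseline):

```python
# Sanity-check the quoted Voyager 1 numbers: 115.963 AU from the Sun,
# and a one-way radio delay of "more than 16 hours".
AU_KM = 149_597_870.7   # kilometers per astronomical unit
C_KM_S = 299_792.458    # speed of light in km/s

distance_km = 115.963 * AU_KM
delay_hours = distance_km / C_KM_S / 3600

print(round(distance_km / 1e9, 2))  # ~17.35 billion km
print(round(delay_hours, 1))        # ~16.1 hours, consistent with the excerpt
```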


While these are exceptional case examples of resilience in IT systems far less complex than hospital IT, I believe healthcare can do better in terms of computer "glitches" affecting mission critical systems that are a bit closer than 10 billion miles away.

-- SS

More on "Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality"

I wrote earlier at "BLOGSCAN - Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality" that:

More on the primary article later. I'm busy managing the medical care of my relative's EHR-related 2010 injuries.

Now that I have some time, here are my thoughts on the article, and on a critique of that article published at the same time.

The article itself is at:

Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality
Max J. Romano, BA; Randall S. Stafford, MD, PhD
Arch Intern Med. Published online January 24, 2011. doi:10.1001/archinternmed.2010.527

and the critique is at:

Clinical Decision Support and Rich Clinical Repositories: A Symbiotic Relationship: Comment on "Electronic Health Records and Clinical Decision Support Systems"
Clement McDonald and Swapna Abhyankar
Arch Intern Med. 2011;0(2011):20105181-2
[note - I know Dr. McDonald personally - ed.]

I restrict my comments to commercially available healthcare IT from traditional for-profit health IT merchants. These comments may not apply, or may not apply as directly, to open source EMRs (such as VistA and VistA-based products, e.g., WorldVistA).

First, I find the article's first major result not very surprising:

In only 1 of 20 indicators was quality greater in EHR visits than in non-EHR visits (diet counseling in high-risk adults, adjusted odds ratio, 1.65; 95% confidence interval, 1.21-2.26)

This is consistent with my belief that the primary problems in healthcare quality are not in the domain of record keeping, whether paper or computerized. Thus, as I wrote at "Is Healthcare IT a Solution to the Wrong Problem?", EMR's alone are a 'solution to the wrong problems.' They may solve bookkeeping issues, but they don't help clinicians, and in fact probably impair more than aid them due to the mission hostile user experience they often present.
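For readers less familiar with the statistic quoted above, here is a minimal sketch of how an unadjusted odds ratio and its Wald 95% confidence interval are computed. The counts below are made up purely for illustration; the study reports adjusted ORs from multivariable models, which cannot be reproduced from a simple 2x2 table:

```python
import math

# Hypothetical 2x2 table (NOT the study's data):
# rows = EHR vs. non-EHR visits, columns = counseled vs. not counseled.
a, b = 120, 880   # EHR visits: counseled / not counseled
c, d = 80, 920    # non-EHR visits: counseled / not counseled

or_ = (a * d) / (b * c)                      # cross-product odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR), Wald method
lo = math.exp(math.log(or_) - 1.96 * se)     # lower 95% bound
hi = math.exp(math.log(or_) + 1.96 * se)     # upper 95% bound

print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An interval whose lower bound stays above 1.0, as in the study's 1.21-2.26, is what makes the diet-counseling result statistically significant.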

The second major conclusion is more debatable:

Among the EHR visits, only 1 of 20 quality indicators showed significantly better performance in visits with CDS [clinical decision support -ed.] compared with EHR visits without CDS (lack of routine electrocardiographic ordering in low-risk patients, adjusted odds ratio, 2.88; 95% confidence interval, 1.69-4.90)

This result is more debatable due to the nature of, and limitations within, the study. CDS "done well" (two simple words behind which lie massive, perhaps wicked-problem-level sociotechnical complexity) might actually improve guideline adherence by ambulatory care physicians.

The major challenge is in "doing it well" (both EHR and CDS), and in getting the good data required to do CDS well, under the substantial impediments posed by a dysfunctional health IT ecosystem, as I wrote about here, and by an oppressive environment that forces medical practitioners to "do more in less time" in the interests of money, often due to governmental interference in care.

These observations suggest that before tackling national EMR and CDS, we should be tackling the dysfunctions of the health IT industry and ecosystem.

The article did utilize some of the best data available to researchers:

DATA SOURCE

We used the most recent data available from the National Ambulatory Medical Care Survey (NAMCS, 2005-2007) and the National Hospital Ambulatory Medical Care Survey (NHAMCS, 2005-2007), both conducted by the National Center for Health Statistics (NCHS, Hyattsville, Maryland). These surveys gather information on ambulatory medical care provided by nonfederal, office-based, direct-care physicians (NAMCS)21 and provided in emergency and outpatient departments affiliated with nonfederal general and short-stay hospitals (NHAMCS).22 These federally conducted, national surveys are designed to meet the need for objective, reliable information about US ambulatory medical care services.23 These data sources have been widely used by government and academic research to report on patterns and trends in outpatient care.

I don't believe the findings can be challenged on the basis of faulty data.

The quality of care indicators chosen also were well thought out:

QUALITY-OF-CARE INDICATORS

Our analysis of quality of care used a selected set of 20 quality indicators that had previously been used to assess quality using NAMCS/NHAMCS26 but that had been updated to reflect changes in clinical guidelines. Each indicator represents a care guideline whose adherence can be measured using the visit-based information available from NAMCS/NHAMCS visit records. The indicators were developed using broad criteria established by the Institute of Medicine (clinical importance, scientific soundness, and feasibility for indicator selection) and specific criteria based on the NAMCS/NHAMCS data sources. The indicators fall into 5 categories: (1) pharmacological management of common chronic diseases, including atrial fibrillation, coronary artery disease, heart failure, hyperlipidemia, asthma, and hypertension (9 indicators); (2) appropriate antibiotic use in urinary tract infection and viral upper respiratory infections (2 indicators); (3) preventive counseling regarding diet, exercise, and smoking cessation (5 indicators); (4) appropriate use of screening tests for blood pressure measurement, urinalysis, and electrocardiography (3 indicators); and (5) inappropriate prescribing in elderly patients (1 indicator).

It should be recalled that this article is in many ways a follow-up to the article "Electronic Health Record Use and the Quality of Ambulatory Care in the United States" (Arch Intern Med. 2007;167:1400-1405; link to abstract here). The article's authors:

... performed a retrospective, cross-sectional analysis of visits in the 2003 and 2004 National Ambulatory Medical Care Survey. We examined EHR use throughout the United States and the association of EHR use with 17 ambulatory quality indicators. Performance on quality indicators was defined as the percentage of applicable visits in which patients received recommended care.

That article's authors reached what to many was a counterintuitive conclusion. The authors examined electronic health records (EHR) use throughout the U.S. and the association of EHR use with 17 basic quality indicators. They concluded that “as implemented, EHRs were not associated with better quality ambulatory care.” (To medical informaticists, the key phrase that explains these findings is “as implemented”, to which I would also add “as designed”, i.e., badly.)

In the latest article, obvious confounding variables appear to have reasonably been taken into account:

Performance on each quality indicator was defined as the proportion of eligible patients receiving guideline-congruent care so that a higher proportion represents greater concordance with care guidelines. Attention was paid to excluding those patients with comorbidities that would complicate guideline adherence (eg, asthma in assessing the use of β-blockers in coronary artery disease). Also, in some instances, care was adherent to the quality indicator if a similar therapy was provided (eg, warfarin rather than aspirin in coronary artery disease).

With regard to the authors' comments:

In a nationally representative survey of physician visits, neither EHRs nor CDS was associated with ambulatory care quality, which was suboptimal for many indicators.

As I mentioned, the first part of this statement represents a more solid conclusion than the second.

However, one must ask why the first conclusion (EHR's without CDS not associated with better ambulatory care quality) might be so.

  • Was the study flawed in some way? As per the previous paragraphs, I don't think so.
  • Were factors in the real world clinical environment not amenable to cybernetic intervention responsible? This seems likely, such as harried and/or poorly trained physicians, pressured visit time limitations, and other factors that make EHR's a band aid at best.
  • Were EHR's a solution to the wrong problem? If documentation issues are not a significant factor in ambulatory care quality (physicians and patients do speak to one another, after all), then EHR's would not be expected to have much additional impact. This also seems likely.
  • Were the EHR's suboptimal? Mission hostile IT certainly should not be expected to have a large positive effect on the behavior of users.

Regarding the finding that even EHR's with CDS do not make a significant difference in compliance with treatment guidelines, one might also ask (as the authors of the critique did) if:

  • The CDS of the clinical IT in use did not cover the indicators measured. The indicators, however, are not particularly unusual or esoteric. It would therefore surprise me if there were few or no major intersections. (If this were the case, it would speak poorly of the commercial HIT merchants and their products.)
  • The CDS implementation itself was mission hostile, making it difficult for users to carry out the recommendations in their harried, time-limited visits.

Either issue goes back to my point about correcting the IT ecosystem before rolling out this technology at a cost of hundreds of billions of dollars.

The authors state:

While our findings do not rule out the possibility that the use of CDS may improve quality in some settings, they cast doubt on the argument that the use of EHRs is a "magic bullet" for health care quality improvement, as some advocates imply.

Yes, advocates right up to the POTUS and the HHS ONC office imply exactly that, for example:

... The widespread use of electronic health records (EHRs) in the United States is inevitable. EHRs will improve caregivers’ decisions and patients’ outcomes. Once patients experience the benefits of this technology, they will demand nothing less from their providers. Hundreds of thousands of physicians have already seen these benefits in their clinical practice. (ONC Chair Blumenthal in the NEJM).

and this:

“We know that every study and every professional consensus process has concluded that electronic health systems strongly and materially improve patient safety. And we believe that in spreading electronic health records we are going to avoid many types of errors that currently plague the healthcare system,” Blumenthal said when unveiling new regulations in Washington on July 13.

As I wrote at "Huffington Post Investigative Fund: FDA, Obama Digital Medical Records Team at Odds over Safety Oversight", no, we don't know that. The assertion about "every study and consensus process" is demonstrably false.

I think anyone still believing in IT as any type of "magic bullet" needs to be disqualified from involvement in healthcare.

On a matter of my own critique of the article, there's this:

Several anecdotal articles describe how CDS can disrupt care and decrease care quality; however, further empirical research is needed.35-36 In the absence of broad evidence supporting existing CDS systems, planned investment should be monitored carefully and its impact and cost evaluated rigorously.

As in my posts "The Dangers of Critical Thinking in A Politicized, Irrational Culture", "EHR Problems? No, They're Merely Anecdotal" and "Health IT: On Anecdotalism and Totalitarianism", there's that shibboleth term "anecdotal" again.

The "anecdotal" articles do not include the work of Koppel, Han and others who did extensive empirical research and found major problems and increased risks of error created by CPOE and bar coding, to name two variations of clinical IT thought to be "slam dunks" for improved care delivery. Further, the "anecdotes" of HIT malfunction in sources such as the FDA's MAUDE database are alarming to me due to the obvious risk to patients they reflect, whether injuries occurred or not (there is a patient death account in MAUDE as well) - but apparently not to the academic community or to ONC, which is chaired by an academic:

http://www.massdevice.com/news/blumenthal-evidence-adverse-events-with-emrs-anecdotal-and-fragmented

... [Blumenthal's] department is confident that its mission remains unchanged in trying to push all healthcare establishments to adopt EMRs as a standard practice. "The [ONC] committee [investigating FDA reports of HIT endangerment] said that nothing it had found would give them any pause that a policy of introducing EMR's could impede patient safety," he said. (Also see http://hcrenewal.blogspot.com/2010/05/david-blumenthal-on-health-it-safety.html).

It has occurred to me that this "what, me worry?" Pollyanna attitude may reflect an academic bias, or over-zealotry regarding peer review and retrospective event descriptions as opposed to prospective risk.

By way of career history, I often lunched with the Director of System Safety of the regional transit authority I once worked for, and accompanied him on site visits; it was amazing how fast he could identify potential risks in sites we visited, both within and outside the authority (e.g., external drug testing laboratories). He identified risks based on personal expertise and experience, and did not seek peer review of his assessments. He sought (and received) action.

He thought, and I, partly as a result of this exposure, think, proactively in terms of risk, not retrospectively in terms of confirmed, peer-reviewed accident reports. On that basis, the 1999 appearance of my site on health IT problems, and the commentary it initially received (still online at this link), should have prompted significant concern from the HIT academic community, were it not for the aforementioned "lensing effect" of their stations in the domain. Its being passed off as little more than "anecdote" (a critique I still hear about the modern site) was a disappointment for someone of my heterogeneous background.

I expressed my views on this issue in a comment:

I am quite fed up with the positivist-extremist academic eggheads whose views are so beyond common sense regarding 'anecdotes' of health IT harm from qualified users and observers, that they would find 'anecdotal stories' of people being scalded when opening hot car radiators as merely anecdotes, and do likewise.

These people have been part of the crowd that's led to irrational exuberance on health IT at a national level.

As Scott Adams put it regarding the logical fallacy known as:

IGNORING ALL ANECDOTAL EVIDENCE

Example: I always get hives immediately after eating strawberries. But without a scientifically controlled experiment, it's not reliable data. So I continue to eat strawberries every day, since I can't tell if they cause hives.

I think that summarizes the "academic lensing effect" regarding risk of HIT.

I can also critique the near-total lack of attention to EHR quality issues:

At the same time, our findings may suggest a need for greater attention to quality control and coordinated implementation to realize the potential of EHRs and CDS to improve health care.

My comment to this is: you don't say? As I've often written, healthcare will not be reformed until health IT itself is reformed.

Finally, on the published critique of the article, I find this passage remarkable:

Regardless of the differences, we know from multiple randomized controlled trials that well-implemented CDS systems can produce large and important improvements in care processes. What we do not know is whether we can extend these results to a national level. The results of Romano and Stafford's study suggest not. However, we suspect that the EHR and CDS systems in use at the time of their study were immature, did not cover many of the guidelines that the study targeted, and had incomplete patient data; a 2005 survey of Massachusetts physicians supports this concern.5 On the other hand, we are not surprised that EHRs without CDS do not affect guideline adherence, because without CDS, most EHRs function primarily as data repositories that gather, organize, and display patient data, not as prods to action.

It's remarkable in several aspects:

  • First, the refrain of "immature systems" (or, expressed more colloquially, a "versioning problem") seems to come up whenever health IT is challenged. One survey of physicians in one state aside, it is striking that more fundamental issues of health IT fitness for purpose and usability usually go unmentioned, as here.
  • If the systems were indeed "immature" just several years ago, this speaks poorly for the health IT merchants and their products (as well as the buyers), and back we go to the issue of remediating the HIT ecosystem before we attempt to remediate medicine with its products.
  • The final statement about EHR's lacking CDS not being "prods to action" raises the question: why were extraordinary claims made for EHR's in the past several decades, and why were so many organizations buying EHR's without equally extraordinary evidence?

Regarding the following assertion in the critique:

Although EHRs without CDS may not improve adherence to clinical guidelines, they are (1) a necessary precondition for having CDS (without electronic data, there can be no electronic support functions); (2) valuable for maintaining findable, sharable, legible, medical records; and (3) when they are amply populated (ie, they contain at least 1 or 2 years of dictations, test results, medications, and diagnoses/problems), physicans love them because there are no more lost charts or long waits on the telephone for laboratory results.

I make the following observations:

  • Re (1): could extra support staff markedly increase "decision support" at the point of care using paper methods, at a fraction of the cost of health IT?
  • Re (2): legible, yes; useful, perhaps not. The nearly 3,000 pages generated during just the first two and a half weeks of my relative's long, HIT-injury-related hospitalization were very legible. Very legible, very filled with legible gibberish, unfortunately, and very useless to most humans needing to review her case and render additional care. This problem once again goes to the need to address problems within the HIT ecosystem.
  • Re (3): I would like a reference for the statement about physicians "loving" their EHRs. Given the extra effort and expense involved, it would seem, even under the best of circumstances, to be a love-hate relationship. Surveys like this one (link) support that notion.

Still more on physicians "loving" EHR's:

Survey: Docs Skeptical of EHRs, Hate Reform

Health Data Management, January 20, 2011

A recent survey of nearly 3,000 physicians shows high levels of displeasure with the Affordable Care Act--and a lot of them don't like electronic health records either.

Of the 2,958 physicians surveyed in September, only 39 percent believe EHRs will have a positive effect on the quality of patient care. Twenty-four percent believe EHRs will have a negative effect on quality, and 37 percent forecast a neutral factor.

HCPlexus, publisher of The Little Blue Book reference guide for physicians, developed and conducted the survey with content vendor Thomson Reuters. The survey sample came from physicians in HCPlexus' database. The fax-based survey was done in September 2010, with additional information directly gathered via phone or e-mail from hundreds of the surveyed physicians in December and January.

In conclusion, the new Archives article represents yet another data point challenging uncritical assertions of automatic EHR-created medical improvement.

I agree with both the article's authors and those who wrote the critique that more research is needed (not more fast-paced implementation).

As concluded in 2009 by the National Research Council in a study led by several HIT pioneers:

Current efforts aimed at the nationwide deployment of health care information technology (IT) will not be sufficient to achieve medical leaders' vision of health care in the 21st century and may even set back the cause ... In the long term, success will depend upon accelerating interdisciplinary research in biomedical informatics, computer science, social science, and health care engineering.

These words should not be ignored.

-- SS

BLOGSCAN - Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality

From the blog of Dr. Sanjay Gupta at CNN health:

Electronic health records no cure-all

Electronic medical records, also known as EHRs, often touted as a powerful antidote for uncoordinated and ineffective medical care, do little to help patients outside the hospital, according to a new study.

Researchers from Stanford University analyzed federal data on more than 255,000 patients, about a third of whom had health information carried electronically. The researchers compared the care of those patients to the care of patients without EHRs, on 20 different measures of quality – for example, whether proper medication was prescribed for patients with asthma or simple infections, or whether smokers were counseled on ways to quit. On 19 of the 20 measures, there was no benefit from having an EHR. The one exception was dietary advice: Patients at high-risk for illness were slightly more likely to receive counseling on a proper diet.

The U.S. Department of Health and Human Services has pushed hard to encourage the adoption of electronic medical records, including $19 billion worth of incentives for doctors and hospitals. A move to EHRs is one of the less controversial aspects of health care reform, and the shift is often touted by President Obama.

But skeptics say there are serious risks to an overreliance on EHRs, from missing information to simple computer crashes. A report last month from the ECRI Institute, a respected organization that studies science and health issues, listed “data loss and system incompatibilities” as one of ten “Top 10 Health Technology Hazards for 2011.”

Jeffrey C. Lerner, president and chief executive officer of the ECRI Institute, said the new findings are no surprise. "It is reasonable to assume that electronic health records will ultimately help the cause," he told CNN in an email, "but new technology has a learning curve. [Somehow, this "new technology" that dates back decades is having one hell of a long learning curve - ed.] Think of your smart phone. Improving quality will remain a tough challenge, but avoiding technology use doesn’t sound like an alternative.”

To examine whether better technology might help, the Stanford team also looked at whether care was better when physicians used a computer system to help guide them through treatment options. It barely made a difference.

The project was started by Max Romano, an undergraduate at the time who now studies medicine at Johns Hopkins University. The final paper was co-written with Dr. Randall Stafford, a professor at the Stanford Prevention Research Center.

"Our initial hope was that we would see a correlation between electronic health records and quality, and when we looked at the subset of patients whose doctors got help from the clinical decision support systems [decision-making software], we'd see an even stronger relationship," says Stafford. "Perhaps we need to re-examine the naive assumption that just putting in place an EHR system will make a huge difference." [That's called "technological determinism" - ed.]

While praising federal efforts to standardize and streamline EHRs, Stafford said the findings raise serious questions about the scope and speed of the $19 billion campaign. "There is a need to question investing that much societal resource in electronic health records when we really don’t know the answer of what effects those are going to have. Having made that decision, it's incumbent for us to demand exactly what we have gotten out of the investment."


Once again, as I pointed out here, you've seen these questions raised long ago at Healthcare Renewal and at a much older site on health IT difficulties the authors may not be aware of, specifically here.

The CNN post describes results reported online in the Archives of Internal Medicine:

Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality

Max J. Romano, BA; Randall S. Stafford, MD, PhD
Arch Intern Med. Published online January 24, 2011. doi:10.1001/archinternmed.2010.527

More on the primary article later.

I'm busy helping to manage the medical care of my relative's 2010 EHR-related injuries.

-- SS

Addendum:

My followup post on the primary article is here.

Blinded Wheat Challenge

Self-experimentation can be an effective way to improve one's health*. One of the problems with diet self-experimentation is that it's difficult to know which changes are the direct result of eating a food, and which are the result of preconceived ideas about a food. For example, are you more likely to notice the fact that you're grumpy after drinking milk if you think milk makes people grumpy? Maybe you're grumpy every other day regardless of diet? Placebo effects and conscious/unconscious bias can lead us to erroneous conclusions.

The beauty of the scientific method is that it offers us effective tools to minimize this kind of bias. This is probably its main advantage over more subjective forms of inquiry**. One of the most effective tools in the scientific method's toolbox is a control. This is a measurement that's used to establish a baseline for comparison with the intervention, which is what you're interested in. Without a control measurement, the intervention measurement is typically meaningless. For example, if we give 100 people pills that cure belly button lint, we have to give a different group placebo (sugar) pills. Only the comparison between drug and placebo groups can tell us if the drug worked, because maybe the changing seasons, regular doctor's visits, or having your belly button examined once a week affects the likelihood of lint.

Another tool is called blinding. This is where the patient, and often the doctor and investigators, don't know which pills are placebo and which are drug. This minimizes bias on the part of the patient, and sometimes the doctor and investigators. If the patient knew he were receiving drug rather than placebo, that could influence the outcome. Likewise, investigators who aren't blinded while they're collecting data can unconsciously (or consciously) influence it.

Back to diet. I want to know if I react to wheat. I've been gluten-free for about a month. But if I eat a slice of bread, how can I be sure I'm not experiencing symptoms because I think I should? How about blinding and a non-gluten control?

Procedure for a Blinded Wheat Challenge

1. Find a friend who can help you.

2. Buy a loaf of wheat bread and a loaf of gluten-free bread.

3. Have your friend choose one of the loaves without telling you which he/she chose.

4. Have your friend take 1-3 slices and blend them with water in a blender until smooth. This is to eliminate differences in consistency that could allow you to determine what you're eating. Don't watch your friend do this; you might recognize the loaf.

5. Pinch your nose and drink the "bread smoothie" (yum!). This is so that you can't identify the bread by taste. Rinse your mouth with water before releasing your nose. Record how you feel in the next few hours and days.

6. Wait a week. This is called a "washout period". Repeat the experiment with the second loaf, attempting to keep everything else about the experiment as similar as possible.

7. Compare how you felt each time. Have your friend "unblind" you by telling you which bread you ate on each day. If you experienced symptoms during the wheat challenge but not the control challenge, you may be sensitive to wheat.

If you want to take this to the next level of scientific rigor, repeat the procedure several times to see if the result is consistent. The larger the effect, the fewer times you need to repeat it to be confident in the result.
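For readers who like to quantify that last step, the logic of repeating the challenge can be sketched in a few lines of code. This is a minimal illustration, not part of the original protocol: it assumes your friend randomizes which loaf you get each round, and that after unblinding you count the rounds in which symptoms were worse after wheat. A simple sign test then tells you how likely that count would be by chance alone. The function names are hypothetical, chosen for this sketch.

```python
import random
from math import comb

def assign_blinded_order(n_rounds, seed=None):
    """The 'friend' role: randomly pick wheat or gluten-free for each round,
    keeping the assignment hidden from the person being tested."""
    rng = random.Random(seed)
    return [rng.choice(["wheat", "gluten-free"]) for _ in range(n_rounds)]

def sign_test_p(n_rounds, n_wheat_worse):
    """One-sided sign-test p-value: the probability of symptoms being worse
    after wheat in at least n_wheat_worse of n_rounds paired rounds, if
    wheat actually had no effect (a 50/50 coin flip each round)."""
    favorable = sum(comb(n_rounds, k) for k in range(n_wheat_worse, n_rounds + 1))
    return favorable / 2 ** n_rounds

# Example: symptoms were worse after wheat in all 5 of 5 paired rounds.
p = sign_test_p(5, 5)  # (1/2)^5 = 0.03125
```

This mirrors the point in the text: a large, consistent effect (worse after wheat every single round) becomes convincing after only a handful of repetitions, while a weak or inconsistent effect would need many more rounds to distinguish from chance.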


* Although it can also be disastrous. People who get into the most trouble are "extreme thinkers" who have a tendency to take an idea too far, e.g., avoid all animal foods, avoid all carbohydrate, avoid all fat, run two marathons a week, etc.

** More subjective forms of inquiry have their own advantages.