
Original Contribution

Dusting for Fingerprints

Paul Misasi, MS, NREMT-P
November 2013

In 1989 British Airways 747 Captain Glen Stewart became the first commercial pilot to face criminal penalties for jeopardizing his passengers’ lives.1,2

Faced with bad weather, problems with the autopilot, a stronger-than-anticipated headwind, two of three crew members suffering from an acute gastrointestinal illness, a last-minute runway change, critical fuel status and a first officer assisting with a complicated approach he had not been endorsed to fly (for which he was granted an in-flight dispensation), Capt. Stewart was unable to land on the first attempt and narrowly missed a hotel that was obscured by fog. The crew did, however, manage to land the aircraft safely on the second attempt.

Capt. Stewart reported the mishap and was subsequently brought up on criminal charges, demoted and fined. The airline managed to save face by vilifying Capt. Stewart, claiming he had acted unprofessionally and that his flying skills had been deteriorating for some time. Three years after the incident, he committed suicide.1,2 Responding to the argument that Capt. Stewart had endangered the lives of his passengers, a fellow pilot jabbed back at the airline: “We do that every time we fly!”

Just like airline pilots, EMS professionals work in complex, high-risk, safety-critical, socio-technical systems.3-6 If we come to understand the nature of this complexity, we can view our profession through a new lens and craft better solutions for patient and provider safety. Because the price of failure is paid in the morbidity and mortality of our patients and coworkers, it is imperative that we dig deeper, beyond human attributions and beyond the “bad apple theory,” to understand failure and guard against it. The purpose of this article is to help us do just that: to reinforce the foundation of a safety culture with essential concepts and tenets.

A Safety-Critical Endeavor

A safety-critical business is one that operates in the face of the following challenges:1,3-9

  1. Time compression: It is important to act swiftly and effectively, and failure to act can be harmful.
  2. Complex dynamics: Situations change for reasons that are not always clear to those making decisions, and the information available for making those decisions is frequently incomplete, inaccurate and untimely. In some situations, the necessary information only becomes available by way of an intervention.
  3. Cognitive challenges: A person may be tasked with managing a situation or conditions they have little experience with or exposure to, and existing guidelines (i.e., protocols, policies and procedures) are not—nor can they be—exhaustive, covering every eventuality of field work.
  4. Dangerous variables: A large number of variables in a situation can be difficult to predict and control, and may simply be uncontrollable and volatile.
  5. Unruly technology: Specialized equipment can obscure processes and information necessary to make decisions, occupy the limited mental resources necessary for problem solving, shift providers’ focus from managing the overall situation to managing devices (a concept known as dynamic fault management), promote the construction of inaccurate mental models and complicate the problem-solving process.

Modern healthcare has been described as “the most complex activity ever undertaken by human beings”3 with many parallels drawn to the nature of military combat. Viewed in this light, it’s a wonder we ever get anything right! Yet as consumers, managers and practitioners, we still expect perfection: perfect knowledge, perfect execution, perfect performance, no excuses! Unfortunately, there is a tremendous price to be paid for our expectation of perfection.13,14

Practical Drift and Goal Conflict

Confounding the issues listed above are conflicts of personal and organizational values. Operators at the sharp end are told to “make safety your highest priority” and “follow the rules,” then in the other ear are told “don’t cost us money,” “get it done faster,” “keep people happy,” “do it with less,” “we have to (or are being forced to) cut resources” and “make do with what you have.” Managers demand that employees be safe and provide quality care to their patients, but often measure their performance by expediency. Providers may thus feel an unspoken pressure to trade accuracy and safety for speed and efficiency, a concept known as “practical drift.”9,13,14,18 It is truly a testament to the caliber of out-of-hospital care providers that they resolve this ambiguity and carefully achieve the organizational mission in spite of these conflicting priorities. But if we (managers and providers) believe we need only rely on providers with the capacity to overcome system limitations (i.e., the “good apples”), our systems will be brittle.

Because of increasing demand and decreasing resources, managers are faced with a dilemma: maintain the quality (and safety) of services, effectively decreasing their availability, or operationalize whatever margin of safety has been achieved through improved processes, procedures, efficiencies, equipment and technology.5,9 Consider stretcher weight limits as an example: to accommodate patients who exceed our stretchers’ 500-lb. weight limit, we design stretchers to hold up to 700 lbs.; now we try to use them for an 800-lb. patient we never would have attempted to place on the old stretcher. Every decremental step away from safety makes strategic sense: it solves a problem. Unfortunately, the distance we “drift” is only seen in the light of a rare and apparently isolated catastrophic failure (i.e., the 800-lb. patient we dropped off the cot is someone we should not have been trying to move in the first place).

Safety is Emergent, Not Inherent

Safety is a “dynamic non-event” created by people; it is not an inherent property of the system. It emerges as a product of decisions made by frontline operators who strike a balance between competing goals, achieving the mission while holding together a “pressurized patchwork” of systems, processes and technologies.1,4,6,7,9,11,14,15,19,20 Murphy’s Law is commonly stated as “whatever can go wrong, will,” but in safety-critical organizations like EMS this law is arguably incorrect, or at least incomplete.7 In EMS, what can go wrong usually goes right, because providers make it go right where the system alone (as designed) would have come up short.

Frontline providers invest themselves in understanding the margins of their abilities, the capabilities of their equipment, the nature of the environment, and their partners and other responders, so they know how far a situation can safely be allowed to stretch and when to intervene. The occasional human contribution to failure occurs because complex systems[i] need an overwhelming human contribution for their safety.7

“Cause” is a Socially Constructed Artifact

When something goes wrong, though, it is tempting to simply dust for fingerprints and look for the bad apple. This approach to failure generally consists of a manhunt, followed by organizational discipline, license forfeiture and financial penalties; in some cases providers have been branded criminally negligent and have even lost their freedom.1,10 We quickly identify an individual’s failure, conclude their guilt and convince ourselves it would not have happened if it weren’t for them! We defame them for their complacency, their ignorance, their incompetence and, perhaps, for being just plain lazy. Those who espouse this approach, known as the Old View or the Bad Apple theory,1,7 are essentially arguing that their systems are perfectly safe and failure is an aberration; that errors, mistakes and mishaps really have nothing in common except for those lazy, self-centered people at the sharp end; and thus that there is nothing more fundamental, complex, time-intensive or expensive we must invest resources into changing.1,7 This approach is popular, fast and painless (for those handing out the reprimands), and there’s an added bonus: it saves system administrators, the public and politicians from having to consider that they may share in the responsibility for the outcome. It doubles as a nice personal defense mechanism for providers as well, because we can use it to shield ourselves from realizing that we too are vulnerable to error:21 Fred gave an overdose of fentanyl. Fred is an idiot and a lousy paramedic. I’m not a lousy paramedic; therefore that will never happen to me. I’m safe from making that mistake.

Cause is something we construct, not something we find.7 It is a social judgment fraught with biases, and where we choose to lay blame ultimately says more about us than it does about the nature of the incident in question.

The Processes of a System are Perfectly Designed to Produce the Current Results20

Processes do not understand goals; they can only produce what they were designed to produce, regardless of what the goal is. If a process is not meeting a goal, all of the inputs (machines, methods, materials, data, environment, etc.) must be considered, not just the people. People are but a small fraction of the things that influence processes and their outcomes. Variability, including error, is built into the system, yet we continue to treat errors as if there were some special cause. Research has demonstrated that a significant majority of managers and peers will stop their investigation of an incident once the person “at the wheel” has been identified and punished.14

Perhaps the best and simplest way to determine whether problems or errors are built into the system is to ask yourself the following question: “If we fired and replaced everyone today, could this problem happen again tomorrow?” If your answer is yes, then what you have is not a bad apple problem, and it is time to start asking why.

Improvement = Accountability

Another significant incentive for sticking with the Bad Apple theory is that it gets organizations out from under the microscope (or “retrospectoscope”) of legal and public scrutiny. When we are faced with a lynch mob biased by hindsight, we can identify those people who left their fingerprints on the wreckage (literally or figuratively), punish them, dust off our hands and tell the public it will never happen again because someone was “held accountable.” Yet few would agree that firing someone for an error guarantees it will never happen again.

Punishment does not teach desired behavior, nor will it reveal the frequency with which others engage in the same practice. On the contrary, it simply guarantees we will never know the real rate at which errors occur, thereby precluding any organizational learning and true system improvement. No organization has ever reprimanded its way to success.22,23 Even B.F. Skinner, the psychologist known for describing operant conditioning, said in retrospect that while punishment worked well as a tool for behavior modification in rats, the technique was never successful at motivating voluntary behavior improvement in humans.24 What this approach does create, however, is manipulation, reprisal and resentment in people who, when confronted with the evidence of their failures, will avoid, deny and fabricate to escape the pain, shame, judgment, ridicule, blame and embarrassment.24 This is not true accountability; this is backward-looking accountability.

A key precept of a safety culture is understanding that accountability is about trust:10 the trust that the public, our stakeholders and our customers place in us to ensure we provide them with the safest possible services. Forward-looking accountability, on the other hand, means the organization and its employees will do everything necessary to actually make improvements, even if that means the organization itself must assume some, if not all, of the responsibility for a bad outcome.10,20

Understanding Permits Learning; Context is Everything!

The core problem with the Bad Apple approach can be stated succinctly: we learn nothing. We learn nothing about our systems, how to make improvements, how to increase safety and, ultimately, how to achieve real, forward-looking accountability. Human error must be a starting point for investigation, not an end point; it begs explanation, it is not a conclusion (the same is true of any folk models we use to disguise “human error,” such as “loss of situational awareness,” “loss of crew resource management,” “failure to ...,” “they should have …,” etc.).25 The key to improving quality and safety is to figure out why the decisions people made and the actions people took made sense to them at the time. The problem lies not in the mind, but in the situation in which the mind found itself.1,7 Explanations of human behavior are only relevant in the context in which the behavior was produced.9 We have to be willing to view the event as an insider, with the information, cues, environment, tools and focus of attention the providers had at the time. If you ever find yourself asking, “How could they not have known?” you are viewing the event from a biased, retrospective outsider standpoint and have to work harder at understanding why the actions and decisions of insiders made sense. This can be accomplished with Sakichi Toyoda’s “Five Whys” technique: asking “Why?” of each answer produced by the previous “Why?” question.20

Systems are imperfect, equipment is faulty, humans are fallible.26 People in safety-critical jobs do not come to work to screw up, to make mistakes, deliver overdoses, intubate the esophagus or design lousy processes. What people did or were doing at the time of an incident made sense given their prioritization of competing goals, imperfect knowledge, deficient systems, ambiguous cues, focus of attention, limited tools and faulty technology. Otherwise they would have done something else—after all, their lives and their livelihoods are at stake.1,4,5,7

Highly Reliable Organizations Engineer Resilience

Organizations that safely navigate high-risk environments, which have come to be known as high-reliability organizations, or HROs (e.g., aircraft carrier operations, nuclear power generation, air travel operations), are those that are not only open to bad news but actively look for it and create non-punitive systems for reporting it. HROs do not exhort people to follow the rules; they actively seek to understand, and are sensitive to, the context of errors and how things actually work “in the streets,” and then develop processes that support predictably fallible human beings.7,9,16 The processes they develop are not attempts to “dumb down” the services providers deliver, but to create barriers, redundancy and recovery processes that address the ubiquitous cognitive (mental) and physical vulnerabilities of human beings, so that their customers are more than one human error away from harm. HROs promote cultures that value safety and adopt managerial practices that balance system and personal accountability in support of their values. In a culture of safety, a high-reliability organization is a meta-learning organization, actively seeking to learn how it learns about safety in the system, identify latent error and understand why it has drifted into at-risk behaviors.5,23 Learning is about modifying an organization’s basic assumptions and beliefs. It is about identifying, acknowledging and influencing the real sources of operational vulnerability and continually expanding our capacity to understand complexity, clarify vision, improve our mental models and create the results we desire.7

Ultimately, it takes a tremendous amount of personal and organizational courage and humility to overcome the momentum of sociopolitical pressures, face the lynch mob, stand up for the virtue of true forward-looking accountability and trade indignation for understanding. Perhaps then we will realize that the set of fingerprints we found on the wreckage is really nothing more than that: a set of fingerprints.

i The term “complex” and the phrase “complex system” do not mean complicated. A complex system is one that “consists of numerous components or agents that are interrelated in all kinds of ways. They keep changing in interaction with their environment, and their boundaries are fuzzy. It can be hard to find out (or it is ultimately arbitrary) where the system ends and the environment begins.”4 Complex systems arise because of, and are held together by, local interactions only, and are highly dependent on initial conditions. The components of the system do not bear all the traits of the system, but because of the interconnectedness of the components and their relationships, “the action of any single agent controls little but influences almost everything.”4,5

References
  1. Dekker S. The Field Guide to Understanding Human Error. Burlington, VT: Ashgate Publishing Co., 2006.
  2. Wilkinson S. Pilot Magazine. 1994. https://www.pprune.org/aviation-history-nostalgia/350529-true-story-2.html.
  3. Gluck PA. Medical Error Theory. Obstetrics and Gynecology Clinics of North America, 2008; 35: 11–17.
  4. Dekker S. Patient Safety: A Human Factors Approach. New York: CRC Press, 2011.
  5. Dekker S. Drift Into Failure. Burlington, VT: Ashgate Publishing Co., 2011.
  6. LeSage P, Dyar JT, Evans B. Crew Resource Management: Principles and Practice. 1st ed. Sudbury, MA: Jones and Bartlett Publishers, 2011.
  7. Dekker S. The Field Guide to Investigating Human Error. Burlington, VT: Ashgate Publishing Co., 2002.
  8. NAEMT. Position Statement: EMS Patient Safety and Wellness. 2009. https://www.naemt.org/Libraries/Advocacy%20Documents/8-14-09%20EMS%20Patient%20Safety.sflb.
  9. Reason J. Managing the Risks of Organizational Accidents. Burlington, VT: Ashgate Publishing Co., 1997.
  10. Dekker S. Just Culture: Balancing Safety and Accountability. Burlington, VT: Ashgate Publishing Co., 2007.
  11. National EMS Culture of Safety. Strategy for a National EMS Culture of Safety. Draft v3.1. May 2012. https://www.emscultureofsafety.org/wp-content/uploads/2011/12/EMS_Culture_of_Safety_DRAFT_3.1.pdf.
  12. NAEMT. Position Statement: EMS Practitioner Safety and Wellness. 2009. https://www.naemt.org/Libraries/Advocacy%20Documents/8-14-09%20EMS%20Practitioner%20Safety.sflb.
  13. Marx D. Whack-A-Mole: The Price We Pay for Expecting Perfection. Plano, TX: By Your Side Studios, 2009.
  14. Marx D, Griffiths S. Just Culture for Healthcare Managers. The Just Culture Community—Moderated by Outcome Engineering. www.justculture.org.
  15. Flin R, O’Connor P, Crichton M. Safety at the Sharp End: A Guide to Non-Technical Skills. Burlington, VT: Ashgate, 2008.
  16. Reason J. The Human Contribution: Unsafe Acts, Accidents and Heroic Recoveries. Burlington, VT: Ashgate, 2008.
  17. Botwinick L, Bisognano M, Haraden C. Leadership Guide to Patient Safety. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement, 2006. www.IHI.org.
  18. Hughes RD, ed. Patient Safety & Quality: An Evidence-Based Handbook for Nurses. Rockville, MD: Agency for Healthcare Research and Quality (US), Apr 2008.
  19. Balestracci D. Data Sanity: A Quantum Leap to Unprecedented Results. Medical Group Management Association, 2009.
  20. Snook S. Friendly Fire: The Accidental Shootdown of U.S. Black Hawks Over Northern Iraq. Princeton, NJ: Princeton University Press, 2000.
  21. Morris CG, Maisto AA. Understanding Psychology. 9th Ed. New York: Prentice Hall, 2010.
  22. Studer Q. Results that Last: Hardwiring Behaviors That Will Take Your Company to the Top. Hoboken, NJ: John Wiley & Sons, Inc., 2008.
  23. Collins J. Good To Great. New York, NY: HarperCollins Publishers, 2001.
  24. Glenn HS. Developing Capable Young People. [Audio CD]. 1998.
  25. Dekker SW, Hollnagel E. Human Factors and Folk Models. Cognition, Technology & Work, 2004; 6: 79–86.
  26. Marx D. Patient Safety and The Just Culture. The Just Culture Community. Outcome Engineering. [DVD]. 2011.
  27. NAEMT. Position Statement: Just Culture in EMS. 2012. https://www.naemt.org/Libraries/Advocacy%20Documents/Just%20Culture%20in%20EMS.sflb.
  28. Dekker S. Ten Questions About Human Error. New York, NY: CRC Press, 2005.
  29. Taylor-Adams S, Brodie A, Vincent C. Safety Skills for Clinicians: An Essential Component of Patient Safety. Journal of Patient Safety, 2008; 4(3): 141–7.
  30. Meadows S, Baker K, Butler J. The Incident Decision Tree: Guidelines for Action Following Patient Safety Incidents. Agency for Healthcare Research and Quality. https://www.ahrq.gov/downloads/pub/advances/vol4/Meadows.pdf.
  31. Duthrie EA. Application of Human Error Theory in Case Analysis of Wrong Procedures. Journal of Patient Safety, 2010; 6(2): 108–114.
  32. Institute of Medicine. To Err is Human. Washington, D.C.: National Academy Press, 2000.
  33. Gawande A. The Checklist Manifesto: How to Get Things Right. New York, NY: Metropolitan Books, 2009.
  34. Conway J, Federico F, Stewart K, et al. Respectful Management of Serious Clinical Adverse Events (Second Edition). IHI Innovation Series. Cambridge, MA: Institute for Healthcare Improvement, 2011. www.IHI.org.
  35. Reason J. Human Error. New York, NY: Cambridge University Press, 1990.
