Employing HPI tools and practices can help reduce safety and reliability incidents, increase collaboration, and lead to better decision-making.

When Lights Went Out in a New England Town

May 24, 2021
Event teaches engineers that people are fallible, errors are predictable, and culture matters.

You know the heart-sinking sound: a circuit breaker unexpectedly opening, plunging you into darkness. Electric service has been accidentally interrupted. It is a sound familiar to many professionals in electric utility substation field work.

It’s a sound that tells you that you’ve let down the utility and your team. More importantly, the event has also let down the utility’s customers. Their heating or cooling, their education or entertainment, their livelihood, or their care have been suddenly put in the dark, even if only for a few minutes. It’s an unforgettable feeling, especially among practitioners with enough experience that the gray hairs dominate under their hard hats.

Almost four years ago, it was our team that let down a good client in New England along with their customers. The experience propelled our transition from a team that talked about human performance and deployed testing barriers in substations to one that employs human performance improvement (HPI) tools routinely and fluently in all our critical activities. Recent conversations with that client have highlighted how far we’ve come and the road that remains in front of us. This is what we know so far.

HPI Benefits from a Common Language

In 2009, the U.S. Department of Energy (DOE) published its two-volume HPI Handbook. Among the wealth of useful information in these two volumes are the five principles of human performance, paraphrased in Fig. 1.

Early in our journey, our teams and leaders sometimes used different terms and descriptions when we meant the same thing. Worse yet, we also used the same terms and descriptions when we meant different things. We needed a common, shared HPI vocabulary. So we wrote a book.

Our Human Performance Improvement Pocket Guide is compact enough to keep nearby. It is quick and easy to refer to when planning a task, when briefing a team, or during an after-action review. The guide describes common unfavorable factors or conditions that increase the chances of an error, referred to as error precursors. It also describes the HPI toolkit: the activities that, when applied under the right conditions, are most likely to mitigate error risk.

Next, we began to use the error precursors and toolkit as elements of our Job Hazard Analysis briefings. We modified our project kickoff documentation to refer to lessons learned from earlier, similar work. Our HPI training refers to the pocket guide frequently to increase understanding. And — in keeping with DOE guidance — we document findings of our incident reviews in terms of the latent organizational weaknesses, flawed controls, and error precursors surrounding the initiating action that led to the event.

Routinely using common terms has accelerated our adoption and understanding of HPI principles.

The Starting Point

HPI starts before project kickoff. Perceived time pressure is one of the most common error precursors. It tempts people to take shortcuts, resulting in errors that later require time to resolve, increasing time pressure on subsequent actors.

Time pressure is also everywhere. Every important project has a deadline by which completion is expected. A leadership responsibility in HPI is to improve the likelihood of team success by leading decisions and setting examples that actively reduce rather than increase error precursors such as time pressure.

We start by making good choices about the projects we offer to support. Chasing every opportunity would put our team under higher time pressure than pursuing a realistic suite of project commitments. It would also consume the precious margin that any project’s natural schedule dynamics may demand.

One result is that we’re less profitable in the short run than we would be if we put our team under greater pressure to perform more work. More importantly, though, our team members experience less time pressure, which allows them to experience fewer error-producing situations and permits them to be more responsive to client needs of the moment.

Most of the time, team members get more rest, reducing fatigue, and they have more time to properly prepare between projects. These circumstances improve team performance, accuracy, and problem-solving capacity. These better conditions also reduce burnout and turnover. We believe these benefits lead to safer, happier, and more engaged staff, better-served clients, and greater profitability in the long run.

Track Undesired Events

Rigorously track and learn from unintended outcomes. There is always more to an unintended event than the initiating action. Returning to the DOE Human Performance Handbook, its anatomy of an event identifies latent organizational weaknesses, flawed controls, and error precursors as contributors in various degrees to every unwanted or undesired event. Reviewing events to determine the other contributing factors pays dividends. First, it’s necessary to know what events are taking place.

Soon after the New England event mentioned above, we instituted a policy requiring that every unintended event be reported up from the field, through project and administrative leadership to division and executive leadership within 24 hours — faster under selected circumstances. To set a foundation of trust in connection with reporting, we established two more policies around events:

  1. We do not discipline individuals who make honest mistakes, provided they contribute to identifying and correcting those errors and to sharing and acting on the lessons learned from the outcomes the error produced.
  2. We do hold accountable those who perform recklessly or in willful violation of company or client policy.

Having set conditions to create visibility of all events, we began examining every event from an HPI standpoint, as mentioned briefly above. These incident reviews take the form of an after-action review (AAR), described below.

The lessons learned and recommendations from the incident review are shared through weekly full-team safety meetings, as scenarios and examples in ongoing training, and in summaries shared with executive leadership for transparency. Themes or consistent areas of weakness are targeted for ongoing training development. We are also developing tools that help remind teams to mitigate risk in work areas that have shown a higher than average propensity to produce an event, such as tests involving current differential CT circuits and lockout relays.

AARs Accelerate Improvement

High-reliability organizations like military special forces groups, civilian first responders, and aircraft flight crews often use comparable methods to operate in complex domains with high resilience and low incidence of catastrophic failures. These organizations are required by their missions to learn fast under dynamic conditions. Their leadership and team-building methods offer important lessons for utility field teams. We have found high value in AAR.

AAR is based on four questions:

  1. What was expected to happen?
  2. What actually occurred?
  3. Why did the job happen that way?
  4. What will we do differently next time?

Successful AARs produce lessons for future improvement, but a well-led AAR also builds trust and transparency within the team and organization. We encourage individuals to emphasize their own opportunities to improve performance instead of putting teammates on the defensive by pointing out their errors.

In particular, leaders set the tone for respectful, productive discussion and ensure all voices are heard. AARs should produce notes that document recommendations, and those notes should be made available to staff outside the team for everyone’s benefit.

Ask the Hard Question

Ask the hard question from curiosity; appreciate the candid answer for its risk. As leaders, we have the responsibility to ask all sorts of questions about team performance. As humans, we have the capacity to ask those questions with empathy, celebrating the joy of a hard-won success or sharing the embarrassment of a difficult loss.

When the questions are hard, approach them from a place of true curiosity. Suspend your judgment of who made the mistake by assuming from the start that they did not intend the negative outcome. One of the easiest ways to do this is by avoiding the question "why?"

From our earliest childhood, the question, "Why did you…?" generally follows a mistake, a poor choice, or other negative outcomes. As a result, the question itself naturally puts most adults into a defensive posture. This reduces the ability to collaborate and eventually erodes trust.

Instead, use "what" or "how" phrasing to pose difficult questions. Ask "What were you trying to achieve?" or "How did that affect the outcome?" These questions help you learn why a course of action was taken and, when asked with true curiosity, can foster a problem-solving mindset and create trust.

Candid answers — those providing the leader with information they don’t necessarily want or jeopardizing the respondent’s role or reputation — carry risk for the respondent. As a leader, your first response to such an answer needs to be: "Thank you."

Unintended events produce ample opportunity for difficult questions and answers. An organization that truly desires to improve through lessons learned should practice asking hard questions non-judgmentally and receiving candid answers with gratitude.

Learn from the positive outcomes too. Most projects produce some level of success. Fortunately, only a few produce the sorts of negative lessons as memorable as our New England outage.

Be intentional about capturing and sharing those successes! Sharing positive lessons learned celebrates successful teams and accelerates everyone’s performance by reinforcing the desire to "do that again!"

Actively and Continuously Share Lessons Learned

We are learning to never stop sharing even the older lessons learned. As people change roles and enter or leave the organization, they take on new responsibilities and move into positions where they are required not only to make decisions but also to recognize and mitigate risks before they manifest as errors.

HPI tools are only as effective as our ability to deploy them at the right moment. The sense of when a team or activity is at risk is one that needs continuous refreshment.

The individual at the center of a bad outcome recalls that outcome and its circumstances easily. This is the Hot Stove analogy at work. Stretching that analogy a bit, if you saw someone else touch a hot stove but didn’t touch it yourself, you may or may not recall that you can get burned. Someone who only heard the story later is even less likely to own the lesson, and so more likely to make the mistake.

We encourage our team with direct experience in a lesson learned to share that experience with their teammates themselves. When they include details about how they’ll recognize and avoid that situation in the future, it makes the sharing richer and more memorable for the peers who hear it directly. We are still working to create more opportunities for the sharing of these war stories for their durable, positive impact on identifying and mitigating risk.

An outside adviser in HPI who helped us early in our journey told us: “You haven’t learned your lessons until you can point to specific, consistent behavior changes those lessons produced.”

We agree. It is also a high bar to measure against. People tend to revert to past behavior when under pressure. Leadership is required to hold the line on new values, standards, and behaviors. There is no more important time to maintain your standards than when it is obviously costly to do so.

Summing Up

Implementing an HPI practice that goes deep by engaging our team’s safety, technical, and commercial decisions has been challenging. It is much more difficult than simply validating that the teams properly use flagging and visual barriers around devices under test.

Not only is it intellectually and culturally difficult, but sometimes it is also hard to prove — even to ourselves — that events and bad outcomes that might have occurred did not. However, there is a satisfaction in taking this journey. We see:

  • Reductions in major and minor safety incidents
  • Greater collaboration
  • Better decision-making

We are encouraged by these observations and the positive feedback from our team and clients. We will continue to invest here. By sharing the thoughts and lessons above, we hope to join with other like-minded peers so we can all learn together.
