IEEE distribution reliability benchmarking regional areas.

Utilities Get a Handle on System Reliability

Nov. 1, 2012
The IEEE Distribution Reliability Working Group is developing benchmarks to assist utilities in quantifying and improving their system reliability.

In the electric power industry, an evolving question is how reliable is reliable enough. A related discussion is ongoing about how much should be spent to reach the next level of reliability. Underlying these issues are the fundamental definitions of the various reliability indices and what the industry norms might be.

To understand the basic reliability performance of the power system, the IEEE Distribution Reliability Working Group undertook the development of key reliability metrics. The work began with the intention of benchmarking across North America to evaluate power system reliability. However, the group quickly discovered the methods of measuring or calculating power system reliability were not consistent. Leveraging work initially developed by Roy Billinton in the 1980s, the working group performed additional work during the late 1990s that resulted in indices providing specific, consistent measures of power system reliability.

With the indices as a foundation, the IEEE Distribution Reliability Working Group now performs an annual benchmarking study to assemble a wide spectrum of reliability results among peer utilities.

Frequency and Duration

How often does the average customer experience an interruption? When a service interruption occurs, how long does it take on average to restore power? Over the measurement period, what is the cumulative time the average customer's service is interrupted? These questions led to indices that practically every utility uses (a worked sketch of the calculations follows the list):

  • System average interruption frequency index (SAIFI)

  • Customer average interruption duration index (CAIDI)

  • System average interruption duration index (SAIDI).
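
To make the relationships concrete, the sketch below computes the three indices from a handful of hypothetical interruption records; the record layout, the outage values and the system size are invented for illustration and are not drawn from the benchmark data.

```python
# A minimal sketch of the standard index formulas, using hypothetical
# interruption records (customers interrupted, outage minutes) for one period.

interruptions = [
    # (customers interrupted, outage duration in minutes) -- invented values
    (1200, 90),
    (300, 45),
    (5000, 180),
]
customers_served = 50_000  # total customers on the system (assumed)

total_customer_interruptions = sum(n for n, _ in interruptions)
total_customer_minutes = sum(n * minutes for n, minutes in interruptions)

saifi = total_customer_interruptions / customers_served  # interruptions per customer
saidi = total_customer_minutes / customers_served        # interruption minutes per customer
caidi = saidi / saifi                                    # average minutes to restore an interruption

print(f"SAIFI = {saifi:.3f}  SAIDI = {saidi:.1f} min  CAIDI = {caidi:.1f} min")
```

CAIDI is simply SAIDI divided by SAIFI, which is why it behaves as the average restoration time per interruption.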

Analysis of performance data led the working group to recognize there was a normal class of interruption events and a class for extreme events, such as ice storms and hurricanes, which are labeled major events. Leading up to 2003, the Distribution Reliability Working Group performed an exhaustive analysis of data across the industry. This work led to the industry's concept and measurement of major events, which now rely heavily on many years of daily interruption data.
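
The major event day identification ultimately adopted in IEEE 1366-2003, summarized here for reference, is the 2.5 beta method: let alpha and beta be the mean and standard deviation of the natural logarithms of the daily SAIDI values, typically computed from about five years of daily data. The major event day threshold is then

```latex
% Major event day threshold (2.5 beta method, IEEE 1366-2003)
T_{\mathrm{MED}} = e^{\alpha + 2.5\beta}
```

Any day whose SAIDI exceeds this threshold is classified as a major event day, and its impact is reported separately from day-to-day performance.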

Why Interruptions Happen

Interruptions occur when fault events happen. Faults can result from a lightning strike, a traffic accident or a piece of equipment reaching its end of life. When a fault occurs, system protective equipment interrupts the flow of electricity so the system fails safely and can be restored in a controlled manner.

Restoration can be completed by dispatching trouble crews or by using certain automated equipment. Measuring how often and where these events occur, and looking for patterns, helps utility engineers and operations staff resolve persistent reliability issues.

Benchmark Study

Another important method of answering the question of how reliable is reliable enough is to evaluate performance among peer utilities. To support this aspect of reliability assessment, the Distribution Reliability Working Group began a benchmarking study. The annual study is designed to assemble a wide spectrum of reliability results. Participants generally have been from North America, although there is no restriction on the locations of utilities that can submit data.

Participating utilities range from small systems serving just a few thousand customers to multimillion-customer systems. Utilities are given an anonymous identifier and segmented by size and continental region. Each utility provides summarized daily reliability data, which is processed to segregate major event reliability from day-to-day events. The summary daily data includes the number of customers across the system that were interrupted during each day and the total customer-minutes of interruption. Each of these totals is divided by the number of customers served by the system, yielding daily SAIFI and SAIDI values.
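
A minimal sketch of that daily processing is shown below, assuming a hypothetical system size and a few invented daily totals; the major event screen applies the 2.5 beta threshold noted earlier, which in practice is computed from several years of daily history rather than a handful of days.

```python
import math

# A minimal sketch of turning summarized daily data into daily indices and
# screening for major event days. The system size and daily totals are
# hypothetical; the threshold follows the 2.5 beta method noted earlier.

customers_served = 1_000_000  # assumed number of customers served

# One entry per day: (customers interrupted that day, customer-minutes of interruption)
daily_totals = [
    (12_000, 900_000),
    (3_500, 260_000),
    (150_000, 42_000_000),  # a storm day
    # ... one entry for every day in the reporting period
]

daily_saifi = [ci / customers_served for ci, _ in daily_totals]
daily_saidi = [cmi / customers_served for _, cmi in daily_totals]

# 2.5 beta threshold: alpha and beta are the mean and sample standard deviation
# of ln(daily SAIDI), ideally computed over roughly five years of history.
logs = [math.log(s) for s in daily_saidi if s > 0]
alpha = sum(logs) / len(logs)
beta = math.sqrt(sum((x - alpha) ** 2 for x in logs) / (len(logs) - 1))
t_med = math.exp(alpha + 2.5 * beta)

major_event_days = {i for i, s in enumerate(daily_saidi) if s > t_med}
ieee_saidi = sum(s for i, s in enumerate(daily_saidi) if i not in major_event_days)
total_saidi = sum(daily_saidi)

print(f"T_MED = {t_med:.2f} minutes; major event days: {sorted(major_event_days)}")
print(f"IEEE (day-to-day) SAIDI = {ieee_saidi:.1f} min, total SAIDI = {total_saidi:.1f} min")
```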

The resulting performance indices, for both day-to-day/IEEE performance (see the Recent Trends sidebar) and total SAIDI, SAIFI and CAIDI, are charted, and quartiles for each index are identified along with the regional designations and anonymous utility codes. Year-on-year comparisons are made of the quartiles as well as of average continental reliability performance.
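
As an illustration of the quartile ranking, the short sketch below assigns anonymized participants to quartiles for a single index; the utility codes and SAIDI values are invented for the example.

```python
import statistics

# Illustrative only: quartile ranking of anonymized participants for one index.
# The utility codes and annual SAIDI values (minutes) below are invented.

annual_saidi = {
    "U01": 85.0, "U02": 132.5, "U03": 94.2, "U04": 210.7,
    "U05": 61.3, "U06": 118.9, "U07": 157.4, "U08": 73.8,
}

q1, q2, q3 = statistics.quantiles(annual_saidi.values(), n=4)  # quartile breakpoints

def quartile(value: float) -> int:
    """Return 1 for the best quartile (lowest SAIDI) through 4 for the worst."""
    if value <= q1:
        return 1
    if value <= q2:
        return 2
    if value <= q3:
        return 3
    return 4

for code, saidi in sorted(annual_saidi.items(), key=lambda kv: kv[1]):
    print(f"{code}: SAIDI = {saidi:5.1f} min, quartile {quartile(saidi)}")
```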

At no time is a listing produced identifying which utilities have participated. Utilities may choose to share their utility code with stakeholders, such as regulators; however, they are not allowed to compromise any other participant's anonymity. The presentation materials are shared with the IEEE Distribution Reliability Working Group and the graphic data is available on the working group's website. Only contributing utilities are allowed access to the anonymous data for subsequent granular analysis.

Anyone using the benchmark results needs to be cautious about the conclusions drawn from the data. An analyst needs to understand which parameters may be driving the results, whether customer density, environmental exposure, customer growth or other influences.

Quartile Performance over Time

Through the history of the benchmark study, quartiles for SAIDI, SAIFI and CAIDI performance have been calculated. Although participation in the study may vary, there is benefit in evaluating the quartiles of those utilities that participated year after year. In the early years, performance levels for each quartile are believed to have changed because of changes in interruption data-collection processes. Thus, a general upward trend in the indices is shown, followed by a downward trend in interruption frequency (SAIFI). The improvement may be attributable to having complete and accurate data on which to base system improvement efforts; at the same time, however, it appears to be taking longer on average to restore an interruption, as measured by CAIDI.

Average Customer Experience over Time

All benchmark participant data is combined to produce the average customer experience across the utilities represented. No significant upward or downward trend can be discerned. Rather, the trend appears essentially constant for both CAIDI (restoration time) and SAIFI (interruption frequency), with slightly more variation in SAIDI (interruption duration). This contrasts somewhat with the quartile performances, suggesting that shifts among individual participants cause variations not seen when looking broadly across the continent.

Enhanced Data Set Features

As the Distribution Reliability Working Group has continued to evaluate areas where it should gather additional information, two have surfaced. First, being able to characterize the circuit demographics — be they rural, urban or suburban — is expected to yield great benefits over time.

Second, since transmission system events can have a substantial impact on distribution system performance, the 2011 study collected the impact of those interruptions. Each participant's performance has been sorted into an enhanced graph for each reliability index. An example is provided in the nearby Enhanced 2011 SAIDI chart, which is sorted by the IEEE ranking that excludes major events. The colored bars are regionally coded and identify distribution-only performance. The pink segments on top of each colored bar show the transmission impact on the index as seen by distribution customers. Finally, gray bars show the effect of major events for that participant.

Enhanced 2011 SAIDI ordered by IEEE (day-to-day) SAIDI ranking.

Other Benchmarking Activities

The IEEE Distribution Reliability Working Group is heavily invested in the use and proper application of the reliability indices. It views the output of this study as an important tool for utilities to use in their individual reliability analyses, and the study is complemented by other work done by the working group. Notably, the working group has just prepared P1782: Guide for the Collection, Categorization and Utilization of Reliability Data, which touches on this benchmark as well as a variety of other reliability topics.

To ensure consistent approaches to benchmarking and the application of reliability metrics, it also collaborates with organizations such as Lawrence Berkeley National Laboratory on research into reliability trends. The working group also is actively striving to be the principal industry source for developing conclusions about reliability measurements and their application to improve reliability. The working group further wishes to ensure conformity to reliability standards, promote understanding of the calculation and application of key reliability metrics, and facilitate development of future reliability metrics to support more effective analysis of distribution system performance.

SIDEBAR: Recent Trends

Charts are available for each reliability index: SAIDI, CAIDI and SAIFI. When a chart indicates “IEEE,” the major event data has been removed from the calculated index (sometimes also called day-to-day performance). “Total” reflects all interruptions recorded during the period. Comparing total SAIDI against IEEE SAIDI, it appears many minutes and events were attributable to major events, which tends to indicate extreme weather was experienced. Further assessment suggests this occurred within the Northeast and mid-Atlantic regions, based on the substantial shift of those regional color bars between the chart pairs.

To view the IEEE Distribution Reliability Working Group Benchmarking data and graphics, visit http://grouper.ieee.org/groups/td/dist/sd/doc/.
 

Acknowledgment

The author would like to acknowledge Roy Billinton, whose early metric work provided a substantial foundation for subsequent work; the Distribution Reliability Working Group, including its past and current leadership, notably Cheri Warren (National Grid), who launched the benchmarking effort with the support of Dan Ward (Dominion Virginia Power), John McDaniel (National Grid), Rodney Robinson (Westar Energy) and Val Werner (We Energies); the membership that developed IEEE 1366-2003 for the advancement of major event data analysis; and PacifiCorp leadership for its support of the activities advanced by the IEEE DRWG.

Heidemarie C. Caswell is the director of T&D asset performance for PacifiCorp, owned by MidAmerican Energy Holdings Co. She is a member of the IEEE and is heavily involved in the Working Group on Distribution Reliability as well as the NERC Performance Analysis Subcommittee, which focuses on bulk power system reliability metrics. She holds a BSCE degree from the University of Washington and is a registered professional engineer.

