As a means to improve distribution system reliability, FLISR (fault location, isolation, and service restoration) is a potentially powerful capability being implemented by many utilities. In a 2014 Department of Energy study of five utilities with FLISR capability in place, FLISR was found to reduce the number of customers interrupted (CI) by up to 45%, and the customer minutes of interruption (CMI) by up to 51% for an outage event. The basic idea of FLISR is to quickly identify the location of a fault and then isolate the faulted area as tightly as possible such that the impact of the power outage associated with the fault (both in terms of the duration and the numbers of customers, or load, affected) is minimized. Of course, this is the goal of distribution operators everywhere. But FLISR implies the use of system intelligence, remote control devices, and communications networks to achieve this goal in a more optimal way than would otherwise be achievable.
FLISR is often referred to as a “self-healing” grid capability. And although not fully self-healing, the goal is to help ensure a more nearly optimal grid configuration under sub-optimal operating conditions. As such, FLISR requires an accurate model of the grid, accurate information about grid operating conditions, some level of remotely operated switching capability, and optimizing algorithms to make the best of an imperfect situation.
FLISR is not a one-size-fits-all, off-the-shelf solution. Certainly, there are Advanced Distribution Management System (ADMS) packages that, because of the level of application integration they provide, can enable a FLISR capability. But utilities have varying levels of automation in the field, different preferences for the level of system automation, and different operational challenges that FLISR must address.
The initial step in the FLISR process is fault location. Locating the fault is a prerequisite to all future actions – and, with many utilities, an opportunity to improve reliability regardless of the level of automation available to support fault isolation and grid reconfiguration.
Faults on the distribution system can be caused by a variety of factors, including equipment or cable failures, structural damage to facilities from weather-related events, and animal encroachments. In some cases, where physical damage to the system is reported by customers, identifying the source and location of the fault can be accomplished quickly. In many cases, however, the source and location of the fault is not obvious, and service restoration activities are delayed until the fault location can be identified. Faults on underground systems can be especially challenging to locate, since physical inspection at those locations is extremely difficult or impractical.
Utilities have employed a variety of methods and technologies to assist with fault location. Some of these involve advanced technologies, while others are highly manual and time-consuming. Commonly used methodologies can generally be grouped into one of the following three categories:
1. Portable test equipment such as cable thumpers or time domain reflectometry (TDR) devices typically used to support underground fault location
2. Permanently installed line indicators – either non-communicating visual indicators, or communicating sensors installed on overhead or underground feeders
3. Permanently installed equipment at substations or operations centers
- Protective relays and digital fault recorders
- Voltage drop triangulation methods
- Impedance-based fault location methods
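To illustrate the third category, the sketch below shows the apparent-reactance variant of impedance-based fault location, using only substation phasor measurements. This is a simplified single-line illustration with made-up values; production relays and fault recorders must also account for fault type, pre-fault load current, and fault resistance.

```python
# Apparent-reactance fault distance estimate (simplified sketch).
# Assumes the substation relay captures the phase voltage V and fault
# current I as complex phasors, and that x_per_km is the known
# positive-sequence line reactance in ohms/km (illustrative value below).

def fault_distance_km(v_phasor: complex, i_phasor: complex, x_per_km: float) -> float:
    """Estimate distance to fault from the imaginary part of V/I.
    The apparent reactance to the fault is largely insensitive to a
    purely resistive fault resistance, unlike the apparent resistance."""
    z_apparent = v_phasor / i_phasor      # apparent impedance seen by the relay
    return z_apparent.imag / x_per_km     # reactance-to-fault / reactance-per-km

# Example: depressed 2.4 kV phase voltage during a fault, 1.2 kA lagging current
v = 2400 * complex(1.0, 0.0)         # volts, reference phasor
i = 1200 * complex(0.94, -0.34)      # amps, lagging roughly 20 degrees
d = fault_distance_km(v, i, x_per_km=0.35)   # distance estimate in km
```

On a branched feeder, this distance defines a ring of electrically equivalent candidate points rather than a unique location, which is why corroborating data sources matter.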
Operations center- or substation-based solutions that attempt to determine the location of the fault electronically and topologically can be used to support FLISR. These solutions typically gather data on faulted circuits from power quality meters or other waveform data sources. However, given the multiple branches created by laterals on most distribution systems, a single measurement or data source may not be adequate. Additional information may be needed to narrow or eliminate the set of potential fault location solutions that would be generated by these technologies. Communicating fault indicators, deployed strategically on the system, can provide the necessary information to confirm a likely fault location with greater precision. SCADA measurements or control points may be able to provide this corroborating information as well. AMI data can also be a source of corroborating data if it is received in a timely fashion. In addition, some utilities prioritize outage flags and restoration flags from AMI meters, and those can be leveraged to manage outages and restoration efforts. This AMI-based information may be received from a collection of bellwether meters on the AMI network that are used for monitoring the electric grid, as opposed to being primarily billing meters.
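The narrowing process described above can be sketched as simple set logic: an impedance-based estimate yields several electrically equivalent candidate sections, and communicating fault indicators eliminate branches depending on whether fault current passed through them. Section and indicator names here are purely illustrative.

```python
# Sketch: narrowing impedance-based candidates with communicating fault
# indicators. All IDs and the zone mapping are hypothetical examples.

# Candidate fault sections produced by a distance estimate on a branched feeder
impedance_candidates = {"L12-A", "L12-B", "L12-C"}

# Map each fault indicator to the set of sections downstream of it
indicator_zones = {
    "FI-1": {"L12-A", "L12-B"},
    "FI-2": {"L12-B"},
    "FI-3": {"L12-C"},
}

# Reported status: did each indicator see fault current?
tripped = {"FI-1": True, "FI-2": True, "FI-3": False}

candidates = set(impedance_candidates)
for fi, saw_fault in tripped.items():
    if saw_fault:
        candidates &= indicator_zones[fi]   # fault lies downstream of this FI
    else:
        candidates -= indicator_zones[fi]   # fault is outside this FI's zone
# candidates is now {"L12-B"}
```

SCADA points or AMI outage flags could feed the same elimination logic as additional zone evidence.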
Even in the absence of supporting auto-isolation and grid reconfiguration technologies, effective fault location can have a significant impact on improving reliability metrics. In systems that rely primarily on manual fault location methods, approximately 20-30% of total outage duration time (perhaps 45-60 minutes) can be attributed to fault location time. Using an intelligent fault location application, this time could be reduced substantially, facilitating a faster service restoration – especially on underground circuits, where the potential source of the fault may not be identified without some sort of excavation. Effective fault location technology can also be applied to examining the likely causes of momentary outages, which can otherwise be challenging to identify and resolve.
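A back-of-envelope calculation makes the reliability impact concrete. Using the ranges cited above, with illustrative (not measured) figures for one outage event:

```python
# Back-of-envelope: effect of faster fault location on customer minutes
# of interruption (CMI) for a single event. All figures are illustrative.
customers = 1500
locate_min_manual = 50    # manual fault location, within the 45-60 min range above
locate_min_auto = 10      # with an intelligent fault location application (assumed)
other_min = 130           # travel, switching, and repair time (unchanged)

cmi_manual = customers * (locate_min_manual + other_min)   # 270,000 CMI
cmi_auto = customers * (locate_min_auto + other_min)       # 210,000 CMI
reduction = 1 - cmi_auto / cmi_manual                      # roughly 22% for this event
```

Because fault location time enters every outage on the circuit, even a modest per-event reduction compounds into meaningful SAIDI improvement over a year.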
The goal of the fault isolation component of FLISR is to minimize the impact of an outage through a reconfiguration of the distribution grid, where this impact can be evaluated in several ways. Widely used metrics include the total number of customers or the total load affected by an outage. Today’s ADMS solutions typically support this reconfiguration, or grid optimization, capability. However, other factors, such as critical loads affected, may need to be taken into consideration as well. With the proliferation of distributed energy resources (DER) in some service areas, the impact of circuit reconfiguration on DER operation must be considered, as well as the protection scheme and worker safety implications. The latter two are related to the two-way nature of power flow in the distribution system when DER is permitted to inject into the system, and this must be taken into account when isolation and reconfiguration/optimization are employed. With these kinds of considerations, fault isolation/grid reconfiguration becomes increasingly complex, and higher levels of optimization capability are required.
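In its simplest form, the reconfiguration decision can be framed as choosing, among the feasible tie-switch options, the one that restores the most customers without overloading the backup feeder. The sketch below uses hypothetical switch IDs and loading figures; a real ADMS optimizer would also weigh critical loads, DER behavior, and protection constraints as discussed above.

```python
# Sketch: selecting a restoration tie switch after fault isolation.
# Objective: maximize customers restored, subject to backup feeder capacity.
# All switch names and figures are illustrative.
from dataclasses import dataclass

@dataclass
class TieOption:
    switch_id: str
    customers_restored: int
    transfer_load_kw: float     # load that would move to the backup feeder
    backup_headroom_kw: float   # spare capacity on that backup feeder

def best_restoration(options):
    """Pick the feasible option restoring the most customers (None if none)."""
    feasible = [o for o in options if o.transfer_load_kw <= o.backup_headroom_kw]
    return max(feasible, key=lambda o: o.customers_restored, default=None)

options = [
    TieOption("TS-101", 800, 2400.0, 2000.0),   # infeasible: would overload backup
    TieOption("TS-102", 650, 1900.0, 2500.0),
    TieOption("TS-103", 300, 900.0, 1500.0),
]
choice = best_restoration(options)   # TS-102
```

Adding critical-load weighting or DER constraints turns this greedy screen into a genuine optimization problem, which is where ADMS-grade algorithms come in.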
In addition to identifying the optimal grid reconfiguration for an identified fault, the distribution management system must be able to eliminate potential grid reconfigurations that are not supportable. Therefore, before an optimal solution can be implemented, time-forward simulations should be performed to evaluate future loading and DER production to ensure that the new system configuration (and any intermediate steps required to achieve this configuration) will not result in violations during the forecasted period.
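The time-forward screen described above can be sketched as a simple feasibility check over a load and DER forecast: for every interval, the net load transferred to the backup feeder must stay within its rating. The forecast values and rating below are illustrative only.

```python
# Sketch: time-forward feasibility screen for a proposed reconfiguration.
# For each forecast interval, the transferred net load (forecast load minus
# forecast DER output on the transferred section) must not exceed the
# backup feeder's rating. All figures are illustrative.

def config_feasible(transfer_load_kw, der_output_kw, feeder_rating_kw):
    """Return True if net transferred load stays within the feeder rating
    for every interval of the forecast horizon."""
    return all(
        load - der <= feeder_rating_kw
        for load, der in zip(transfer_load_kw, der_output_kw)
    )

# 6-hour forecast at 1-hour resolution (kW)
load_fc = [1800, 2100, 2600, 2900, 2700, 2200]   # load transferred
der_fc  = [0, 150, 600, 800, 500, 100]           # DER output on that section
ok = config_feasible(load_fc, der_fc, feeder_rating_kw=2300)
```

The same check would be repeated for each intermediate switching state, since a configuration that is feasible at its endpoint can still pass through an overloaded intermediate step.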
As with fault location, there are many different approaches to fault isolation and circuit reconfiguration among utilities. Many utilities have implemented decentralized fault detection and isolation schemes that react in real-time to a detected fault. These decentralized schemes can be expected to provide an immediate grid reconfiguration that narrows the impact of an outage. However, it is more common for utilities to deploy centralized grid management applications (protective relaying, ADMS, DMS/OMS) for FLISR capabilities. The centralized capability may be in addition to decentralized schemes, requiring some level of coordination and awareness at the centralized level. While some utilities have adopted fully automated FLISR operations, others adopt more semi-automated schemes, with the final reconfiguration action dependent on operator review and/or manual switching operations.
The reconfiguration associated with isolating the fault really begins the service restoration process by narrowing the extent of the outage and returning more customers to service. The remainder of the service restoration process remains largely manual. However, the fault location process can be expected to accurately pinpoint where the repair crews need to focus their efforts. This is especially valuable on underground circuits where, otherwise, a time-consuming and expensive process to locate the source of the fault would likely be required before repairs can begin. With integration between the fault location application and the work management application, the coordinates for and/or a map of the likely fault location can be transmitted to the field crew. This, together with in-field access to an “as-built” GIS map of the grid, can speed the restoration process. In-field access to the GIS map also provides the ability for the field crew to update configuration/circuit information back into GIS post-restoration – maintaining the alignment between the physical grid and the grid model.
FLISR has many faces, in that it can encompass various technologies and different approaches to realize its goal of minimizing the outage impact of faults on customers. With good choices, the potential benefits of a successful FLISR capability can be significant relative to its cost. In developing a roadmap and business case for FLISR, utilities should consider how the full range of today’s technologies (as well as existing data and infrastructure investments) can be leveraged to improve each aspect of fault location, isolation, and service restoration.
U.S. Department of Energy, Fault Location, Isolation, and Service Restoration Technologies Reduce Outage Impact and Duration, December 2014.
For this document, we are considering a “fault” to be a permanent condition following lockout of the protective relaying, and which requires personnel intervention to correct.
Leaving aside the obvious manual circuit inspection for overhead construction.
The most common are system average interruption duration index (SAIDI), system average interruption frequency index (SAIFI), customer average interruption duration index (CAIDI), and customer average interruption frequency index (CAIFI).