Sandia National Laboratories’ Research Team Develops AI Algorithms to Detect Physical Issues, Cyberattacks within Grid
Researchers at Sandia National Laboratories have developed AI algorithms that detect physical problems, cyberattacks, or both at once within the grid.
“As more disturbances occur, whether from extreme weather or from cyberattacks, the most important thing is that operators maintain the function and reliability of the grid,” said Shamina Hossain-McKenzie, a cybersecurity expert and leader of the project. “Our technology will allow the operators to detect any issues faster so that they can mitigate them faster with AI.”
Adrian Chavez, a cybersecurity expert involved in the project, added that because the neural network runs on single-board computers or on existing smart grid devices, it can protect older equipment as well as the latest equipment that lacks coordinated cyber-physical monitoring.
The package of code works at three levels: local, enclave and global. At the local level, the code monitors for abnormalities at the specific device where it is installed. At the enclave level, devices in the same network share data and alerts to give the operator better information on whether an issue is localized or happening in multiple places. At the global level, only results and alerts are shared between systems owned by different operators.
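The three tiers described above can be sketched in code. Everything here is hypothetical and for illustration only; the class names, the per-unit voltage threshold and the alert format are assumptions, not Sandia's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """A result/alert record; carries no raw measurements."""
    source: str
    level: str    # "local" or "enclave"
    message: str

class Device:
    """Local level: watches for abnormalities at one specific device."""
    def __init__(self, device_id):
        self.device_id = device_id

    def check(self, reading, limit=1.05):
        # Hypothetical per-unit voltage limit, purely illustrative.
        if abs(reading) > limit:
            return Alert(self.device_id, "local", f"abnormal reading {reading}")
        return None

class Enclave:
    """Enclave level: devices on the same network pool data and alerts so
    the operator can see whether an issue is localized or widespread."""
    def __init__(self, devices):
        self.devices = devices

    def collect(self, readings):
        alerts = []
        for d in self.devices:
            a = d.check(readings[d.device_id])
            if a is not None:
                alerts.append(a)
        # Simultaneous alerts at several devices suggest a wider issue.
        if len(alerts) > 1:
            alerts.append(Alert("enclave", "enclave",
                                "abnormalities at multiple devices"))
        return alerts

def share_globally(alerts):
    """Global level: only summary results cross operator boundaries,
    so raw, proprietary device data stays inside each utility."""
    return [a for a in alerts if a.level == "enclave"]
```

Note how `share_globally` filters out the device-level alerts: neighboring operators learn that something is happening, but never see each other's raw readings.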
Thus, operators will receive early alerts of cyberattacks or physical issues their neighbors are seeing while still protecting proprietary information. The Sandia team collaborated with experts at Texas A&M University to create secure communication methods between grids owned by different companies, Hossain-McKenzie added.
According to Logan Blakely, a computer science expert who led development of the AI components, the challenge in detecting cyber-physical attacks is combining the constant stream of physical data with intermittent packets of cyber data.
Physical data such as the frequency, voltage and current of the grid is reported 60 times a second, while cyber data such as other traffic on the network is more irregular, Blakely said. The team used data fusion to extract the important signals in the two different kinds of data.
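One simple way to fuse a constant-rate stream with irregular events is to bin the cyber events onto the physical sampling grid before handing both to a detector. The sketch below assumes that binning approach purely for illustration; the article does not describe the team's actual fusion method:

```python
import numpy as np

def fuse(physical, cyber_times, rate_hz=60.0):
    """Align a constant-rate physical stream (shape [T, n_features]) with
    irregular cyber event timestamps by counting the events that fall in
    each 1/rate_hz window, then concatenating the counts as an extra
    feature column. (Illustrative stand-in, not Sandia's method.)"""
    physical = np.asarray(physical, dtype=float)
    t = physical.shape[0]
    # One time-window edge per physical sample, starting at t = 0.
    edges = np.arange(t + 1) / rate_hz
    counts, _ = np.histogram(cyber_times, bins=edges)
    return np.hstack([physical, counts[:, None].astype(float)])
```

The result is a single matrix in which every row holds one physical sample plus the cyber activity observed during that same 1/60-second window, which is the kind of combined input an anomaly detector can consume directly.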
The team also used an autoencoder neural network, which classifies the combined data to determine whether it fits the pattern of normal behavior or whether there are abnormalities in the cyber data, the physical data or both, Hossain-McKenzie said. For example, an increase in network traffic can indicate a denial-of-service attack, while a false-data-injection attack can produce atypical physical and cyber data, Chavez added.
Unlike other kinds of AI, autoencoder neural networks do not need to be trained on data labeled with every type of issue that might come up, Blakely said. Instead, the network requires only a large amount of data from normal operations for training.
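The idea of training only on normal data can be made concrete with a deliberately simplified linear stand-in: a PCA projection used as an "autoencoder," whose anomaly threshold is derived from reconstruction errors on the normal training data alone. Sandia's deployed model is a nonlinear neural autoencoder; this sketch only illustrates the principle:

```python
import numpy as np

class LinearAutoencoder:
    """Minimal linear stand-in for an autoencoder: encode = project onto
    the top-k principal components of normal data, decode = project back.
    Samples unlike the training distribution reconstruct poorly, so a
    high reconstruction error flags an anomaly."""
    def __init__(self, k=2):
        self.k = k

    def fit(self, normal_data):
        x = np.asarray(normal_data, dtype=float)
        self.mean_ = x.mean(axis=0)
        # SVD of the centered normal data yields the principal directions.
        _, _, vt = np.linalg.svd(x - self.mean_, full_matrices=False)
        self.components_ = vt[: self.k]
        errs = self.score(x)
        # Threshold comes from normal-operations data only: no labeled
        # examples of attacks or faults are ever needed.
        self.threshold_ = errs.mean() + 3 * errs.std()
        return self

    def score(self, data):
        x = np.asarray(data, dtype=float) - self.mean_
        recon = (x @ self.components_.T) @ self.components_
        return np.linalg.norm(x - recon, axis=1)

    def is_anomaly(self, data):
        return self.score(data) > self.threshold_
```

A real neural autoencoder replaces the linear projection with learned nonlinear encode/decode layers, but the detection logic is the same: anything the model cannot reconstruct well did not look like normal operations.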
After the team constructed the autoencoder neural network, they put it to the test in three different ways. First, they tested the autoencoder in an emulation environment, which includes computer models of the communication-and-control system used to monitor the grid and a physics-based model of the grid itself, Hossain-McKenzie said.
The team used the environment to model a variety of cyberattacks or physical disruptions, and to provide normal operational data for the AI to train on. The collaborators from Texas A&M University assisted with the emulation testing.
Second, the team incorporated the autoencoder onto single-board computer prototypes that were tested in a hardware-in-the-loop environment, Hossain-McKenzie said. In hardware-in-the-loop testing, researchers connected a real piece of hardware to software simulating various attack scenarios or disruptions.
When the autoencoder runs on a single-board computer, it can read the data and execute the algorithms faster than a virtual implementation of the autoencoder can in an emulation environment, Chavez said. Hardware implementations are a hundred to a thousand times faster than software implementations, he added.
The team is working with Sierra Nevada Corporation to test Sandia’s autoencoder AI on the company’s existing cybersecurity device called Binary Armor, Hossain-McKenzie said.
The team is testing both formats, single-board prototypes interfaced with the grid and the AI package on existing devices, in the real world at the Public Service Company of New Mexico’s Prosperity solar farm as part of a Cooperative Research and Development Agreement, Hossain-McKenzie said. The tests started in summer 2024, Chavez said.
The project expanded upon a previous R&D 100 Award-winning project called the Proactive Intrusion Detection and Mitigation System, which focused on detecting and responding to cyber intrusions in smart inverters on solar panels, Hossain-McKenzie said. The team is building on the autoencoder AI in similar projects, she added.
The team filed a patent on the autoencoder AI and is looking for corporate partners to deploy and hone the technology in the real world, Hossain-McKenzie said. The autoencoder is expected to be used to protect other critical infrastructure systems such as water and natural gas distribution systems, factories, even data centers, Chavez said.
The project is funded by Sandia’s Laboratory Directed Research and Development program.