T&D World

Why the Graph Database Can Bring New Efficiencies to the Grid

June 25, 2019
The graph database approach optimizes each element of the grid in real time and fixes things rapidly when any point in the grid fails.

Those who work in utilities know all too well how complicated it can be to live and work in a hyper-connected world. Success depends on balancing supply and demand across an extraordinarily complex network, and none is perhaps more complex than the typical electricity supply grid.

In one sense, the typical supply grid is a microcosm of the world. Although it consists of a set of individual entities, what truly defines these entities is not their own characteristics, but their relationships with others. These relationships are complicated. Most importantly, within them are concealed all sorts of dependencies, some obvious, some less so.

For example, one node in the network (a substation, say) going down clearly has an impact on others around it. But what happens if another fails? We are now in a scenario of multiple network failures that may or may not compound each other in terms of their impact. These are the scenarios that keep us awake at night, and they require huge amounts of redundancy to be built into the typical network.

What if there was a better way?

At present, our existing tools for mapping networks and modeling operations and failures are not really designed with networks in mind at all. They emerge from the world of the relational database. Fundamentally, the relational database stores and files information about discrete entities. It’s a new-fangled version of a theory of data storage that goes back to cuneiform tablets, library cards, and everything in between. And it is terrible at modeling connections.

Within traditional systems, it is almost impossible to model scenarios involving network failure. It is also almost impossible to understand just how important any given node in a network is. Sure, we can have a reasonable idea of what happens to its immediate neighbors if it goes down, but what about knock-on effects? What about compounded dependencies? It is simply impossible to get a real handle on the real-world implications.
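To make the knock-on idea concrete, here is a minimal sketch of what "follow the connections" means once the grid is stored as a graph rather than as rows in tables. The node names, the `feeds` structure, and the `knock_on` helper are all illustrative inventions, not anything from a real grid model or a particular graph database product; a traversal like this is the kind of query graph databases answer natively.

```python
from collections import defaultdict, deque

# Hypothetical toy grid: each edge points from a supplier to a node it feeds.
feeds = defaultdict(list)
for src, dst in [
    ("plant", "sub_a"), ("plant", "sub_b"),
    ("sub_a", "feeder_1"), ("sub_a", "feeder_2"),
    ("sub_b", "feeder_3"), ("feeder_1", "meter_x"),
]:
    feeds[src].append(dst)

def knock_on(failed):
    """Breadth-first walk of everything fed, directly or indirectly, by `failed`."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for child in feeds[node]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(knock_on("sub_a")))  # -> ['feeder_1', 'feeder_2', 'meter_x']
```

The same walk also gives a crude measure of how important a node is: the more of the grid reachable only through it, the more its failure compounds.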

As a result, we don’t spot the most important points in the network, we need to use huge amounts of processing power just to manage simple analyses, and we have to build expensive redundancies into the network. And when things go wrong, it takes longer to establish how to fix them and in what order.

The Graph Database Alternative

Fortunately, there is an alternative: the graph database. Graph databases put connections and relationships at the heart of their approach. In other words, the connections are the network. It is a completely new approach, and (finally) a break from Mesopotamia!

The network-centric approach has a number of interesting benefits, across a whole range of industries, but in electricity generation and distribution it is particularly powerful. By centering connections, we have a real, visual sense of what the real world looks like. That in itself makes a huge difference.

It also helps us enforce ‘the rules’ that relate to that real world. I spoke recently to Kevin Feeney, CEO of leading graph database company DataChemist, who put it like this: “if we don’t model the world in terms of relationships, we can’t identify when something is wrong. If, for example, we have a rule that more than x properties cannot connect to a single substation, it isn’t possible to see when that rule gets broken for whatever reason. With the graph database, we can and we can immediately.”
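Feeney's substation rule is easy to state as a check over connections. The sketch below is an assumed illustration, not DataChemist's implementation: the connection list, the `MAX_PROPS` limit, and the `overloaded` helper are invented names, and a graph database would express the same constraint declaratively rather than in application code.

```python
from collections import Counter

# Hypothetical (property, substation) connection pairs and an assumed limit.
MAX_PROPS = 2
connections = [
    ("house_1", "sub_a"), ("house_2", "sub_a"),
    ("house_3", "sub_a"), ("house_4", "sub_b"),
]

def overloaded(conns, limit):
    """Return substations connected to more than `limit` properties."""
    counts = Counter(sub for _, sub in conns)
    return {sub: n for sub, n in counts.items() if n > limit}

print(overloaded(connections, MAX_PROPS))  # -> {'sub_a': 3}
```

The point of the quote is that when connections are first-class data, a violation like `sub_a` exceeding its limit surfaces immediately instead of hiding across join tables.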

But lastly, and most importantly, the graph database approach brings new efficiency to our ever more adaptive grids. It allows us to optimize each element of that grid in real time, maximize efficiency, model scenarios easily, and fix things rapidly when any point in the grid fails (because we can immediately see the impact of failure and the fastest route to resolution).

The graph database approach offers great opportunity for the electricity industry. Smart operators should investigate further.

DataChemist was founded in 2018 as a startup with a single goal: to make data meaningful. That means bringing a new way of thinking about data to the market, one built around exposing the connections and relationships that remain hidden within traditional methods, and helping customers understand the implications of those connections.

"We're defined by our connections, and only by understanding them can we really make any judgment about any person or entity," says CEO Kevin Feeney. "We add that understanding even when, for whatever reason, some might want to keep connections hidden."
