Artificial Intelligence: the New Energy

March 18, 2019
Advancements in AI stand to benefit the energy sector but come with their own limitations and practical concerns.

As with all emerging technological trends, some elements of artificial intelligence (AI) are hyped out of proportion, some are ahead of their time and some even incite fear. Yet there remains truth beneath the hype cycles and buzzwords. Advancements in AI stand to benefit the energy sector but come with their own limitations and practical concerns.

Examples in Energy
In energy, there are quite a few interesting examples in both the retail and the commercial space:

Fault prediction and dynamic maintenance: This is one of the clearest uses of AI: by drawing on sensor data from individual units, operators can predict equipment failures before they happen and significantly reduce the costs of downtime and maintenance. A start-up, Verv, offers a meter device that identifies individual home appliances, tries to predict faults and alerts users when devices are accidentally left on.
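To make the fault-prediction idea concrete, a minimal sketch is shown below: flag sensor readings that deviate sharply from recent behaviour, a crude stand-in for the far richer models operators actually deploy. The sensor name, data values and threshold here are all hypothetical.

```python
import statistics

def rolling_zscore_alerts(readings, window=5, threshold=3.0):
    """Return indices of readings deviating more than `threshold`
    standard deviations from the mean of the preceding `window`."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # guard against zero spread
        if abs(readings[i] - mean) / stdev > threshold:
            alerts.append(i)
    return alerts

# Stable (hypothetical) vibration readings, then a spike that might precede a fault.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.1, 0.95, 5.0, 1.0]
print(rolling_zscore_alerts(vibration))  # → [7]
```

Real systems would combine many sensors and learned failure signatures, but the principle is the same: learn what normal looks like, then alert on departures from it.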

Investment optimization: BP’s venture arm invested in an AI start-up called Beyond Limits, whose technology digs through seismic images and geological models to increase the chances of success when drilling wells.

Energy efficiency: DeepMind, part of Google, has championed the use of Reinforcement Learning to reduce energy use in Google’s data centres by a claimed 15%. The model learned from years of operational data and then issued changes to individual units.

Better prediction: DeepMind has also recently announced talks with National Grid to better forecast demand on the system, with the stated goal of reducing the entire country’s energy usage by 10%.

Trading: Origami Energy uses machine learning to predict asset availability and market prices in near real time, enabling it to successfully bid into the Frequency Response markets. Pöyry is also exploring a deep learning algorithm to support trading and dispatch decisions for generation assets in the prompt trading markets, focusing on the question of when to commit a trade (to maximize the option value of flexible capacity).

Retail: Retailers are using ML to understand patterns of customer behavior, to attract and retain customers and even to predict bill (non-)payment. Customer call centers are increasingly fronted by algorithms that chat with customers, verbally or online, and resolve routine queries.

Customers: For customers, AI solutions are also gaining traction, and many retailers are offering these systems as part of an integrated package. Devices such as Amazon’s Alexa enable the customer to seamlessly interact with their thermostat and control systems (such as Centrica’s Hive). This increasing customer interaction with the device leads to the development of a more personalized usage profile, which reduces bills for the consumer and helps the energy provider to accurately forecast demand.

What’s in a name?
Currently, AI, Machine Learning (ML) and related terms such as Deep Learning and Reinforcement Learning have seen wide coverage across a variety of industries. But what do all these terms mean? AI is a broad term whose scope varies, but the idea is simple: adaptive intelligence displayed independently by a machine, in which behavior is not necessarily pre-determined but adapts to data inputs. In informal settings AI is used interchangeably with ML, but in reality ML is a subset of AI. Deep Learning and Reinforcement Learning are promising areas within ML.

Within AI and next to ML are the fields of robotics, speech recognition, computer vision, etc., which are key building blocks towards enabling machine intelligence. ML is the use of statistics to give computers the ability to learn from data. This distinction matters because fast advances in ML have driven the sudden worldwide interest in AI. The initial improvements were in the underlying algorithms and data architectures, but the current wave of improvement rests on just two factors: data and computation.

Developments in data are driven by smartphone uptake and improvements in sensors, supported by breakthroughs in data communications and storage technologies. Far more datasets are available than ever before, allowing in-depth scrutiny and more accurate predictions. The developments in computation can be attributed to dramatic increases in processing power, enabling algorithms to handle many parameters simultaneously and to run computations in parallel rather than in sequence, saving a great deal of time.

What is AI / ML generally used for?
Most ML methods are suited to tackling two key problems: prediction-type problems and classification-type problems. Prediction-type problems include questions such as, "Can I predict when this equipment will fail?" (If so, I can deploy maintenance before failure to ensure the plant doesn’t grind to a halt, while saving on unnecessary maintenance.)
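A prediction-type problem can be sketched in a few lines: fit a straight line to hypothetical wear measurements and extrapolate when wear crosses a failure limit. The data, units and the limit itself are illustrative only; real predictive-maintenance models are far more sophisticated.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

hours = [0, 100, 200, 300, 400]
wear_mm = [0.0, 0.5, 1.1, 1.4, 2.0]   # hypothetical wear measurements over time
slope, intercept = fit_line(hours, wear_mm)

failure_limit = 3.0                    # illustrative wear level at which failure is expected
predicted_failure_hour = (failure_limit - intercept) / slope
print(round(predicted_failure_hour))   # → 608
```

The maintenance team can then schedule an intervention before the predicted hour rather than running to failure.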

Classification-type problems include questions such as, "Is this customer different from another, based on the data I have on them?" (If so, I can study the differences further and perhaps deploy a new marketing program to retain them.)
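A classification-type problem can be sketched just as simply: assign a customer to the nearest of two usage profiles. The profile names, centroids and features (monthly kWh, share of usage at peak hours) are invented for illustration.

```python
def nearest_profile(customer, profiles):
    """Return the name of the profile whose centroid is closest (squared
    Euclidean distance) to the customer's feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda name: dist2(customer, profiles[name]))

profiles = {
    "off-peak heavy": (900.0, 0.2),   # high usage, mostly off-peak
    "peak-time light": (300.0, 0.7),  # low usage, mostly at peak
}
print(nearest_profile((850.0, 0.25), profiles))  # → off-peak heavy
```

Real customer-segmentation models learn the groups from data (e.g. by clustering) rather than taking them as given, but the classification step works on the same principle.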

The key requirement for these ML predictions is clean and useful data. For this reason, the ML method showing the most potential recently is Deep Learning, a class of models that can extract complex patterns and sequences from a dataset. In challenging areas such as speech recognition and image recognition, Deep Learning models have seen more success than traditional rules-based approaches or detailed expert systems, and have driven a marked increase in prediction accuracy, well beyond what was possible before. Popular products that use Deep Learning (among other models) include Siri, Cortana and Google Translate for speech/text recognition. The Google Translate model, for example, was trained on large volumes of EU and UN documents online that provide the same text professionally translated into different languages.

The most promising area within AI and ML is Reinforcement Learning, which trains software agents towards a goal through rewards, in a sense mimicking how humans learn. Combined with Deep Learning, it has led to powerful strides towards accurate prediction systems and is the key approach driving autonomous vehicles.

A particularly interesting example of Reinforcement Learning is AlphaGo, the first computer program to beat a high-ranking professional player at the complex board game Go. It did this by playing repeatedly against itself and others, learning to choose the right move from the billions of possible combinations. In short, the core idea of ML methods is that, as long as the data and computational power are available, it is possible to augment and even automate decision-making by creating a form of expert system.
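The reward-driven learning described above can be sketched with a toy Q-learning example: an agent on a five-cell line learns, purely from a reward at the goal, that moving right is the best action everywhere. The states, rewards and hyperparameters are entirely illustrative; systems like AlphaGo combine this idea with deep neural networks and vastly larger state spaces.

```python
import random

random.seed(0)
n_states, goal = 5, 4
actions = (-1, +1)                      # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                    # training episodes
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == goal else 0.0
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy should be "move right" (+1) in every state.
policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(goal)]
print(policy)
```

No one tells the agent which move is correct; the value of the final reward propagates backwards through the Q-table until the right behaviour emerges, which is the essence of learning through rewards.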

The future of these expert systems lies not only in enabling automation, but also in aiding complex decisions. Nowadays, computational power is easy to acquire (even on a short-term basis), and common algorithms are reasonably well known. The major investment is the time needed to acquire and assemble data, clean out its errors, and assess which algorithm delivers the best performance.

Despite all the upsides, AI comes with many caveats.

  • What happens if there is a low volume of data available for the ML model to learn from? Can it contextualize between two similar tasks and transfer learnings from one to the other?
  • How can AI systems be protected against false (perhaps maliciously introduced) data?
  • As some of these models are essentially black boxes, can the model users understand why the model took a particular action?
  • Will the AI systems learn to collude or break through regulatory ringfences?
  • Can the model take the right decision when it faces a new unforeseen environment?
  • And, as decisions are increasingly driven by AI outcomes, will the underlying system converge, or will the outcomes be unstable?

To some degree, Reinforcement Learning coupled with intelligent model design, safety constraints and external controls can allay many of these concerns (this approach is being used in autonomous vehicle technology). Techniques will develop to combine historically trained AI outcomes with anticipated future changes in the fundamentals (e.g. new interconnection, changes in market rules), but these questions will persist: how should a car react in an earthquake if it has never seen data from earthquake conditions?

As the standards of AI decision support improve, the interface with humans must adapt. Initially, humans must learn to trust the systems, even though the results cannot fully be explained. Techniques will be found to blend humans’ anticipation of the future with existing (historical) data to augment today’s algorithms, in what might be termed "augmented artificial intelligence" (in which the AI is augmented by human knowledge, not the other way around).

And ultimately, as the algorithms become more robust and are given more autonomy to act without human intervention, we need to ensure that appropriate monitoring, alerts and controls are put in place. That being said, AI, or rather ML as it stands, is a powerful tool for prediction and classification problems, as long as the data to learn from exists. In non-critical business applications, ML is uncovering value in almost every area where past predictive data exists. The caveats must be put in context: human behavior and existing prediction methods are far from perfect, and AI should not be compared with an impossible benchmark.

For now, AI/ML coupled with better analytics, improvement in sensors and robotics can help automate the small directed issues entirely and let us focus on the unstructured problems of tomorrow. As succinctly put by Andrew Ng, “AI is the new electricity – enabling us to do more.”

About the Author

Ravi Mahendra | Analyst

Ravi Mahendra is an analyst at Pöyry Management Consulting involved in European gas modeling projects using Pöyry’s proprietary market modeling software, Pegasus. Previously he was an O&G equity research intern at Tudor Pickering Holt & Co (now Perella Weinberg) and an energy modeler at KBC Technologies (now Yokogawa Electric), where he applied ML techniques to oil refineries. Ravi has an MEng in Chemical Engineering from the University of Manchester.
