People

Bert Zwart

Professor of Stochastic Operations Research at the TU/e department of Mathematics and Computer Science

Unravelling the mathematics of power failures

Mathematics can help transform the power grid into a smart grid, in the same way it helped transform the classical phone network into the internet. That is the strong belief of Bert Zwart, Professor of Stochastic Operations Research at TU/e. He uses probability theory to explain why, when and where large blackouts are most likely to occur.

‘The energy transition will lead to increased decentralization, complexity and uncertainty in the daily, hourly and instantaneous operation of the grid. On the supply side, the growing amount of energy generated by intermittent sources such as wind turbines and solar panels increases the uncertainty about peak loads. On the demand side, new types of demand emerge, such as electric vehicle charging.

In an interesting experiment in Lochem in 2017, colleagues from Twente University studied to what extent local grids can handle the loads we can expect as more people buy electric vehicles. The inhabitants of a neighborhood were promised a bottle of champagne if they managed to blow the fuse at district level. By the time the third Tesla started charging, the blackout was a fact.

In our research, carried out in the Stochastic Operations Research section of the Mathematics department, we look at the influence of electric vehicle charging on the stability of the grid, at preventive maintenance of wind turbines using reinforcement learning, and at the reliability of the grid through rare event simulation. These types of problems are really awesome for a mathematician like me, since they are very hard and highly complex.

Complex systems
Power systems are more complex than most other complex systems. You have to deal with many parameters and loads of intertwined branches in the network. One of the things we are currently working on is understanding large blackouts from a probabilistic point of view. Usually, a single problem can be mitigated, but two at the same time are simply too much. So the question is: what are the chances of two rare events happening at the same time, and what determines the size of the resulting blackout?

For us as mathematicians, the biggest question is what to include in our models and what to leave out. For example, in parts of the US it is rather common for squirrels to eat parts of the infrastructure. This leads to small, local blackouts. And when several of these small blackouts occur all at once, chances are that the grid will not be able to cope. Does this mean that I have to include squirrel attacks in my models as well, however rare their concurrent incidence might be?

So when we start building a model, we first need to determine which microscopic details of the system are key in causing macroscopic blackouts. We then use the theory of large deviations, known from statistical physics, to compute the probability that a rare event occurs, and to determine the most likely way in which that rare event will occur, if it occurs at all. Such an analysis can reveal the weakest link in a system.
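As a rough illustration of the rare event simulation idea mentioned here (and not Zwart's actual grid model), the sketch below estimates an exponentially small probability for a sum of random loads using exponentially tilted importance sampling, the standard Monte Carlo counterpart of a large-deviations analysis. The distribution, sample sizes and threshold are all illustrative assumptions.

```python
import numpy as np

# Toy setting: n independent exponential(1) "loads"; we want
# P(S_n > n*a) with a > 1, which is exponentially small in n and
# essentially invisible to naive Monte Carlo.
n, a = 50, 2.0
rng = np.random.default_rng(0)

# Exponential tilting: sample each load from Exp(rate = 1 - theta),
# with theta chosen so that the tilted mean equals the threshold a.
theta = 1.0 - 1.0 / a

def tilted_estimate(num_samples=100_000):
    # Draw loads under the tilted (change of measure) distribution.
    x = rng.exponential(scale=1.0 / (1.0 - theta), size=(num_samples, n))
    s = x.sum(axis=1)
    # Likelihood ratio dP/dQ = exp(-theta * S) / (1 - theta)**n.
    log_lr = -theta * s - n * np.log(1.0 - theta)
    weights = np.exp(log_lr) * (s > n * a)
    return weights.mean()

print("estimated P(S_n > n*a):", tilted_estimate())
```

The tilted distribution concentrates samples around the most likely way the rare event happens, which is exactly the information a large-deviations analysis provides.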

Tail behavior
In statistics, rare events are characterized by so-called tail behavior. Large blackouts obey Pareto laws, which are heavy-tailed laws. In a heavy-tailed distribution, the probability of very large events decays much more slowly than exponentially; for Pareto laws it decays only polynomially. The events at the far end of the tail therefore have a very low, but far from negligible, probability of occurrence. These heavy-tailed distributions are not that well understood. People run simulations, but it is very hard to predict the tails. As part of my Vici project, Tommaso Nesti, Fiona Sloothaak and I came up with a different approach.
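A small numerical sketch can make the contrast concrete. The snippet below, with an arbitrarily chosen tail index and sample size, compares empirical tail probabilities of a Pareto law against a light-tailed exponential with the same mean; it is only meant to show how slowly a heavy tail decays.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Classical Pareto with P(X > x) = x**(-alpha) for x >= 1 (heavy-tailed).
alpha = 1.5
pareto = 1.0 + rng.pareto(alpha, n)                   # numpy's pareto is shifted by 1
expo = rng.exponential(scale=pareto.mean(), size=n)   # light-tailed, same mean

for x in (10, 50, 100):
    print(f"P(X > {x:3d}):  Pareto {np.mean(pareto > x):.2e}"
          f"   Exponential {np.mean(expo > x):.2e}")
# The Pareto tail decays like x**(-1.5); the exponential tail decays
# geometrically, so huge events remain plausible only under heavy tails.
```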

We have developed a mathematical model that represents power grids as graphs with heavy-tailed sinks, which represent the demand from cities. We modeled the energy demand of a city as proportional to the number of people living in it, and determined the network flows and line capacities. Then we let a line trip and let the failure propagate through the network: if other lines become overloaded, they fail as well.
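To show the cascade mechanism in code, here is a deliberately simplified toy, not the model from the paper: it skips the DC power-flow equations and heavy-tailed city demands, and simply redistributes the flow of a failed line uniformly over the surviving lines. All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
num_lines = 20
flows = rng.uniform(0.5, 1.0, num_lines)                  # initial line flows
capacity = flows * rng.uniform(1.05, 1.4, num_lines)      # modest safety margins
alive = np.ones(num_lines, dtype=bool)

def trip(line):
    """Trip one line and propagate overloads until the cascade stops."""
    queue = [line]
    while queue:
        k = queue.pop()
        if not alive[k]:
            continue
        alive[k] = False
        survivors = np.flatnonzero(alive)
        if survivors.size == 0:
            break
        # Simplified redistribution of the lost flow over surviving lines.
        flows[survivors] += flows[k] / survivors.size
        # Any line pushed past its capacity trips in turn.
        queue.extend(s for s in survivors if flows[s] > capacity[s])

trip(0)   # trip line 0 and watch the cascade
print(f"{(~alive).sum()} of {num_lines} lines failed")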

This cascading failure approach turned out to work rather well when validated against data from the US. We demonstrated that extreme variation in city sizes is the main reason for the scale-free nature of blackouts. The parameter that determines how fast the probability of a big blackout vanishes as its size grows is completely determined by the city size distribution. This means that network upgrades might not be the most effective way to mitigate the consequences of big blackouts. Instead, it may be more effective to invest in responsive measures that enable consumers to react to them.

Probabilistic reliability
Some follow-up questions we would like to investigate are, for example, what a fair price for insurance against blackouts would be, and how much storage we would need if we were close to 100 percent renewables and wanted to limit shortages to, say, 30 minutes per year. Perhaps it is this type of probabilistic reliability that we should adopt for the power grid of the future, instead of clinging to the current deterministic idea that everything should always work.’
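The storage question can at least be phrased as a simple Monte Carlo computation. The sketch below is purely illustrative: the net-supply process is a placeholder rather than a weather or demand model, and the units and search grid are assumptions, but it shows how a "30 minutes of shortage per year" criterion translates into a storage-sizing search.

```python
import numpy as np

rng = np.random.default_rng(3)

def shortage_minutes(storage_capacity, years=20):
    """Average simulated minutes per year in which demand cannot be met."""
    shortages = 0
    for _ in range(years):
        level = storage_capacity                 # start each year with a full store
        # Hourly net supply = renewables minus demand (arbitrary placeholder units).
        net = rng.normal(loc=0.02, scale=1.0, size=365 * 24)
        for surplus in net:
            level = min(storage_capacity, max(0.0, level + surplus))
            if level == 0.0 and surplus < 0:
                shortages += 60                  # one hour of unmet demand
    return shortages / years

# Search over candidate storage sizes for the 30-minutes-per-year target.
for cap in (10, 20, 40, 80, 160):
    print(f"storage {cap:4d}: ~{shortage_minutes(cap):6.0f} shortage minutes/year")
```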