Simulating power markets with competitive self-play

In this post we present an unpublished article from 2009 that uses self-play to evolve bidding strategies in congested power markets with nodal pricing.

Reinforcement learning using self-play is currently one of the most fascinating areas of research within artificial intelligence. Papers like Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm make for inspiring reading, and I'm sure we'll see a host of real-world applications in the future. These current trends in reinforcement learning came to mind as an old piece of unpublished research I wrote surfaced some weeks ago. In the article I used a genetic algorithm (GA) – all the rage back in 2009 – to learn the weights of a neural network that provides the bidding strategy for competing generator companies in a simulated power market. Using this setup I could demonstrate well-known Nash equilibria (NE) in common market situations, as well as simulate competition in intractable market setups.

I wrote the article while working as a research scientist at Sintef Energy AS back in 2009, and the work was financed by Sintef. The topic is not that relevant to pursue as a TSO, but the methods proved interesting enough to warrant a post here. Thanks to Sintef for letting me publish it!

The entire article draft can be downloaded here

The market participants

The paper starts off with a description of the market participants (generators). The generators must bid their capacity into the power exchange. Each generator can behave strategically by adding a markup to its marginal cost, thereby attempting to achieve a higher profit, but at the risk of losing market share. The markup is the output of the neural network, which we will come to later.

fig1

For convenience we use a linear marginal cost function with a scalar markup.
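
Concretely, if generator u has a marginal cost a_u + b_u q at quantity q and chooses a scalar markup m_u, the bid submitted to the exchange is simply (my notation, not the article's):

\text{bid}_u(q) = a_u + m_u + b_u q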

Nodal pricing and market clearing

The power exchange collects all the market participants' bids (this includes demand bids). It then has to find the intersection of the aggregate supply and demand curves while taking transmission constraints into account. In other words, the power exchange must find the dispatch of all generators that minimizes the total system cost. With linear bids and a linearized version of the power flow equations, this is a quadratic optimization problem.

The problem formulation implies that each node in the power system can have a unique power price (if there are constraints in the network). The nodal power prices are given by the Lagrange multipliers of the nodal power balance constraints; each multiplier is the cost of supplying one additional MW of energy at that node. This setup roughly reflects several real-world power markets.

To solve this quadratic optimization problem we can employ what is called a DC optimal power flow. The Python package pypower handles all this for us. A limited amount of coding should let you add a variable markup to each generator's marginal cost function.
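
As a rough sketch of what this could look like, using pypower's bundled case9 test system and arbitrary markup values (not the system from the article):

import numpy as np
from pypower.api import case9, ppoption, rundcopf
from pypower.idx_bus import LAM_P
from pypower.idx_gen import PG

ppc = case9()  # small test system with three generators

# add a markup to the linear coefficient of each generator's polynomial cost;
# gencost columns are [model, startup, shutdown, ncost, c2, c1, c0]
markups = np.array([5.0, 0.0, 12.0])
ppc['gencost'][:, 5] += markups

# clear the market with a DC optimal power flow
ppopt = ppoption(VERBOSE=0, OUT_ALL=0)
result = rundcopf(ppc, ppopt)

dispatch = result['gen'][:, PG]         # cleared generation per unit (MW)
nodal_prices = result['bus'][:, LAM_P]  # Lagrange multipliers = nodal prices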

After the market is cleared, each participant is paid the price at the node where it resides, even if its power is consumed at a node with a different price. The profit from trade between differently priced nodes, the congestion rent, accrues to the system operator.

Evolving strategies using a genetic algorithm

The genetic algorithm (GA) is used to evolve strategies for the market participants. Each market participant has its own pool of n strategies that are evolved using the GA. The aim of each participant is to maximize its profits.

At each generation g of the simulation the participants meet several times; these meetings are the iterations. At each iteration, each participant randomly selects a strategy from its pool of strategies. This strategy is evaluated to yield a bid B_u that is submitted to the market clearing algorithm. The market clearing returns the participants' profits, which are stored in memory. After the desired number of iterations has passed, the fitness F_{u,s,g} of strategy s for unit u at generation g is calculated as the sum of the profits it received divided by the number of times it was played. Thereafter, selection, crossover and mutation are applied to produce the new generation of strategies, and a new generation is started.

In pseudocode:

  1. Initialize and load power system structure, g = 0
  2. g = g + 1, i = 0
  3. i = i + 1
  4. For each participant u, select a random strategy s from the pool
  5. Find the bid of each participant B_u, given the random strategy
  6. Evaluate the market clearing algorithm given the bids of all players and store the achieved profits
  7. If i < numIterations goto line (3), else:
  8. Calculate the fitness of each strategy F_{u,s,g} as the average of all profits received this generation
  9. Perform selection, crossover and mutation of all strategies
  10. If g < numGenerations goto line (2)
  11. Simulation ends

The number of iterations is selected such that all strategies are played on average ten times each generation. In this paper, the standard number of strategies for each participant was n = 50. In other words, there were 500 iterations per generation.
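
Each strategy is ultimately just a flat vector of parameters (the next section describes what they parametrize), so one generation of the GA could look like the sketch below. Tournament selection, uniform crossover and Gaussian mutation are generic choices here, not necessarily the operators used in the article.

import numpy as np

def evolve_pool(pool, fitness, mutation_rate=0.05, mutation_scale=0.1):
    # pool: (n_strategies, n_parameters) array, fitness: average profit per strategy
    n, dim = pool.shape
    new_pool = np.empty_like(pool)
    for k in range(n):
        # tournament selection of two parents
        a, b = np.random.randint(n, size=2)
        c, d = np.random.randint(n, size=2)
        parent1 = pool[a] if fitness[a] > fitness[b] else pool[b]
        parent2 = pool[c] if fitness[c] > fitness[d] else pool[d]
        # uniform crossover
        mask = np.random.rand(dim) < 0.5
        child = np.where(mask, parent1, parent2)
        # Gaussian mutation of a few randomly chosen parameters
        mutate = np.random.rand(dim) < mutation_rate
        new_pool[k] = child + mutate * np.random.normal(0.0, mutation_scale, dim)
    return new_pool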

Representing the strategy as a neural network

A strategy is the rule by which a participant determines its bid, in this case the markup. In mathematical terms, a strategy is a function that takes a set of inputs and returns a decision (the markup). This function has parameters that determine the nature of the relationship between inputs and output; it is these parameters that the GA works on.

Inspired by research on the Iterated Prisoner's Dilemma (IPD), the inputs to the strategy are taken to be the markup and the price attained when the strategy was last played. The loose resemblance to an IPD is that the markup is one's own level of cooperation and the price corresponds to the cooperative level of the competitor(s). Using these two inputs, the strategy essentially has a memory of one round and is able to evolve tit-for-tat strategies and similar, e.g. if you bid high last turn, I continue to bid high this turn. The amount of information taken as input could easily be extended to allow for more sophisticated strategies; perhaps this could be an area for recurrent neural networks.

A convenient way of creating a parametrized function that can represent a strategy as described above is a multilayer feed-forward neural network. It can approximate any function arbitrarily well, given sufficient neurons. In the article I used a small neural network with one hidden layer. The weights and biases of the neural network were then evolved using the genetic algorithm. A modern approach would perhaps use deep Q-learning or something similar instead.
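
A minimal numpy sketch of such a strategy is given below. The hidden layer size and the squashing of the output to [0, max_markup] are my assumptions; the flat weight vector is what the GA evolves.

import numpy as np

N_HIDDEN = 4  # assumed hidden layer size, not necessarily the article's choice

def random_strategy():
    # flat parameter vector: W1 (N_HIDDEN x 2), b1, w2 and b2
    return np.random.normal(size=2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1)

def evaluate_strategy(weights, last_markup, last_price, max_markup=30.0):
    # map last round's (markup, price) to a new markup in [0, max_markup]
    w1 = weights[:2 * N_HIDDEN].reshape(N_HIDDEN, 2)
    b1 = weights[2 * N_HIDDEN:3 * N_HIDDEN]
    w2 = weights[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = weights[-1]
    hidden = np.tanh(w1 @ np.array([last_markup, last_price]) + b1)
    return max_markup / (1.0 + np.exp(-(w2 @ hidden + b2)))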

Perfect competition and co-operative equilibria

The model is primarily used to set up market situations corresponding to classical games that have been analyzed extensively in the literature. The idea is to show that self-play converges to well-known Nash equilibria (NE).

The first part of the article shows some rather obvious results:

  • In a market with a lot of small generators and a surplus of capacity, the markup converges to zero
  • In a duopoly where each generator can serve exactly half the load, the markup converges to the highest allowed markup
  • In a duopoly where one generator alone can serve the load (Bertrand competition), the markup converges to zero

Conceptually, Bertrand competition bears some resemblance to the Prisoner's Dilemma: if both generators co-operate by bidding high they get a good profit; however, it is always in each player's interest to defect and bid marginally lower, so mutual defection is ensured. In the IPD it is known that players can evolve strategies that maintain a mutually cooperative level, even though the equilibrium for both players is to defect if the game is played only once. In the normal IPD, there are only two choices, either full cooperation or full defection. A common extension of the game is to include several discrete levels of cooperation.

fig5
Market price in Bertrand competition for an initially random population (A), defecting population (B), and co-operative population (C). In all simulations, the marginal cost is 100, and the markup is eventually reduced to zero.

In a paper on the IPD from 2002, the authors noted that the chance of mutual cooperation dwindled as the number of possible choices in the IPD increased. Considering that each player in the Bertrand game can select any real positive number as her price (which can be thought of as the level of cooperation), it is simply very unlikely to remain at a mutually cooperative level. This effect can explain why mutual cooperation is likely if the choice of each player is limited to only a few discrete price levels, but highly unlikely if the price can be selected from a continuum.

Other papers on agent-based modelling limit the agents to select from a discrete set of choices. If the market situation resembles Bertrand competition, the above discussion might lead one to think that mutual co-operation can occur, albeit with a small probability if the number of choices is large. However, it turns out that the similarity between the Bertrand game and the Prisoner's Dilemma disappears if the number of choices is limited. In fact, the outcome of simulated Bertrand competition with discrete levels of cooperation is quite arbitrary and depends on the exact markups the players can choose from.

Competition with capacity constraints

With capacity constraints, one generator alone cannot serve the entire market as in Bertrand competition. The participant that bids higher than its opponent still captures some residual demand. A market player can thus follow two strategies:

  1. Try to undercut the other player to get a large market share and risk low prices, or
  2. Set the markup high to make sure the market clearing price remains high, and accept that the other player gets the larger market share.

In this case there exists a price where a market player is indifferent as to which direction it should deviate, as long as the opponent sticks to its price. The price of indifference occurs when undercutting the opponent gives exactly the same profit as bidding higher than the opponent and capturing the residual demand.

The price of indifference depends on the exact market setup. In the simulation below, the price of indifference was 160. The bid of player 1 stays high (to maximize profits) while the bid of player 2 stays below the threshold of 160. Due to random mutation in the genetic algorithm, the values don't stabilize entirely. Who ends up as the high bidder, and thus worse off, is entirely random.

fig7
The two players undercut one another until, at the threshold price, player 1 starts increasing its markup.

Duopoly in a congested two-bus network

So far, the underlying electrical network has been kept uncongested to enable analysis of some well known economic games. In this section, the simplest possible constrained network will be analysed theoretically and simulated with the developed model. The network is depicted in the figure below. It consists of two buses with a generator and load each; the buses are connected by a transmission line capable of transferring K MW in either direction.

fig9
Two bus network with constrained transmission capacity

Moreover, the transmission capacity K is less than the demand at either of the buses, and the generators are capable of serving the load at their own bus while at the same time utilizing the entire transmission capacity.

Accordingly, the generators face a strategic choice:

  1. Bid high to raise the price at its bus and serve the load that the opponent cannot capture because of the limited transmission capacity; or,
  2. Bid low and serve all the load it possibly can cover.

At first, this might seem like the Bertrand game with capacity constraints. However, there is one important difference: since the generators are paid the nodal price (and not a common system price), the low bidder can profitably raise its price towards that of the high bidder. The consequence is that there is no NE in pure strategies, resulting in a cyclical variation in the price over the course of a simulation.
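
To see why, assume for simplicity a common constant marginal cost c and equal demand D at both buses. The profits from the two choices are then roughly

\pi_{\text{high}} \approx (p_{\text{high}} - c)(D - K), \qquad \pi_{\text{low}} \approx (p_{\text{low}} - c)(D + K)

Because each generator is paid its own nodal price, the low bidder can raise p_{\text{low}} towards p_{\text{high}} without losing any volume; once the two prices meet, undercutting again becomes the profitable move, and the cycle repeats.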

fig11
Simulated markups in the two-bus network for 4 different transmission capacity levels K

Competition in a meshed and congested grid

This section presents results from an analysis of a small benchmark power system presented in a paper on agent-based modelling. There are 3 strategically behaving generators in the system, and all load is assumed to be completely inelastic. All transmission lines are assumed to have sufficient transmission capacity, with the exception of the line from bus 2 to bus 5, which is limited to 100 MW.

fig12
Power system diagram. The numbers close to the lines are reactances in per unit.
tab1
Generator data

The authors of the original paper analytically computed the NE and simulated the system using adaptive agents with a discrete set of markups. Two symmetric NE were found: generator 3 should always bid its highest markup, while either generator 1 or 2 bids the highest markup and the other bids its marginal cost. The agent-based approach did discover the NE, but alternated between the two equilibria in a cyclical fashion. Here, the markups can be any real number on the interval [0, 30]; otherwise the simulation setup is identical to that of the original work.

I find the same NE; however, no cyclical behavior occurs. As soon as the agents have locked in on one NE, the game stays in that equilibrium for the rest of the simulation. To discover several equilibria, the model must be rerun with random initial conditions.

fig13
Simulated markups for the three generators and corresponding consumer benefit, producer benefit and congestion rent on the right

The figure above shows the simulated markups on the left and the corresponding consumer benefit (about 2500), producer benefit (15000) and congestion rent on the right. If all producers had bid according to their marginal cost, we would have achieved the socially optimal solution, which has a producer benefit of 1056 and a consumer benefit of 17322. Accordingly, there has been a large transfer of benefit from consumers to producers due to market power.

If we double the capacity on the line between bus 2 and 5, market power is mostly eradicated. In addition, the total economic benefit to society increases slightly.

Ideas for further work

Even with the simple setup used in this work it is possible to evolve interesting strategies. However, the field of reinforcement learning has evolved a lot since 2009. One idea for further development would be to re-implement the ideas presented here using modern frameworks for reinforcement learning like OpenAI Gym and Tensorforce. Such a setup could possibly be used to evolve cooperative strategies with longer memory, as well as to simulate time-varying market situations (like in the real world).

 

Comparing javascript libraries for plotting

After some research on available javascript libraries for plotting we decided that plotly fits our needs the best. We also provide a small vue-wrapper for plotly.

Plotly.js renders fast and has a large selection of visualizations

We frequently plot quite large amounts of data and want to be able to interact with the data without long wait times. I use the term large data, and not big data, since we are talking about a typical dataset size on the order of some 100k datapoints. This is not really big data, but it is still large enough that many plotting packages have trouble with rendering, zooming, panning etc. Ideally we would like render times below about 500 ms.

To find the ideal plotting package we set up a small test with some promising candidates: Echarts, Highcharts, Plotly and Bokeh. We plotted around 300k datapoints from a time series and noted the render times over five repetitions; the chart below displays these render times in milliseconds. Echarts and Bokeh are canvas based, and in our test they rendered much slower than the other packages, which are SVG based.

Additionally, we find ourselves wanting a large selection of visualization types; classic bar charts, line charts, etc. are just not enough. Incidentally, plotly offers a wide range of charts. Plotly is also available as a Python and R package, so we can build experience in one plotting package independently of programming language.

Vue-wrapper for plotly

One of our main front-ends uses the vue.js framework. To integrate plotly cleanly we created a vue-wrapper for plotly. Check it out on GitHub:

A vue wrapper for plotly.js chart library
https://github.com/statnett/vue-plotly

 

Setting up a forecast service for weather dependent failures on power lines in one week and ten minutes

Combining Tableau, Python, Splunk and open data from met.no to deliver a real-time forecast of the probability of failure due to wind and lightning on overhead lines.

Our last post was about estimating the probability of failure for overhead lines based on past weather and failure statistics. The results are currently used as part of our long-term planning process, where a Monte Carlo tool simulates failures in the Norwegian transmission system.

In this post you can read about how we re-used this model for a different use case: using the current weather forecast to predict probability of failure per overhead line. This information is very relevant for our system operators when preparing for severe weather events. We use data from the Norwegian Meteorological Institute, Tableau for visualization, Python for our backend and Splunk for monitoring. This post explains how.

Overview of the forecast service

The core deliverable of the mathematical model described in our previous blog post is a set of fragility curves, one per overhead line. To recall from the previous post, these fragility curves are constructed from historical failures on each overhead line combined with historical weather data for that line. The resulting fragility curves give us the relation between a weather event and the expected probability of failure. In our current model we have two fragility curves per line: one that describes the expected failure probability as a function of wind exposure, and one that describes it as a function of lightning exposure.

Now that we have this information per overhead line, we can combine it with weather forecast data to forecast expected failure probabilities for all overhead lines. Our service downloads the latest weather forecast when it becomes available, updates the expected failure probabilities, and exports the results to file. We designed a Tableau dashboard to visualize the results in an interactive and intuitive manner.
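
In essence, the update step just evaluates each line's fragility curve at the forecast weather exposure. A minimal sketch (the lognormal parametrization follows the previous post; the function name and data structures are illustrative):

import numpy as np
from scipy.stats import lognorm

def forecast_failure_probabilities(exposure_forecast, fragility_params):
    # exposure_forecast: line name -> array of forecast weather exposure per hour
    # fragility_params: line name -> (mu, sigma) of the line's lognormal fragility curve
    return {
        line: lognorm.cdf(exposure,
                          s=fragility_params[line][1],
                          scale=np.exp(fragility_params[line][0]))
        for line, exposure in exposure_forecast.items()
    }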

The resulting setup is displayed below:

flow diagram service

In the remainder, we cover how we created and deployed this service.

Data input: fragility curves

See our previous blog post for an explanation of the creation of the fragility curves.

Data input: open weather data

The Norwegian Meteorological Institute (met.no) maintains a THREDDS server where anyone can download forecast data for Norway. THREDDS is an open source project which aims to simplify the discovery and use of scientific data. Datasets can be served through OPeNDAP, OGC's WMS and WCS, HTTP, and other remote data access protocols. In this case we use OPeNDAP so we don't have to download the huge forecast files, only the variables we are interested in.


import logging

import numpy as np
import pandas as pd
import pyproj
import xarray as xr
from scipy.spatial import cKDTree

logger = logging.getLogger(__name__)

url = 'http://thredds.met.no/thredds/dodsC/meps25files/meps_det_extracted_2_5km_latest.nc'

# using the powerful xarray library to connect to the OPeNDAP server
ds = xr.open_dataset(url)
lat = ds['latitude'][:]
lon = ds['longitude'][:]
time_values = ds['time'][:].values
dt_time = pd.to_datetime(time_values)

try:
    wind_speed = ds['wind_speed'].load()
except Exception:
    logger.error('Error in reading wind speed data', exc_info=True)
    ds.close()
    raise

# in reality geo_object_list is a list of power line objects with tower coordinates
geo_object_list = []

def geo_object_coordinates(geo_object):
    # placeholder: returns a list of (x, y) UTM coordinates for the towers of a line
    return []

# collecting x, y coordinates for all power towers
line_cartesian_positions = [
    (x, y) for geo_object in geo_object_list
    for (x, y) in geo_object_coordinates(geo_object)
]
target_points = [[x_, y_] for (x_, y_) in line_cartesian_positions]

projection = pyproj.Proj(
    "+proj=utm +zone=33 +north +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
)

# setting up the model grid in the same UTM coordinates as the towers
lon2d, lat2d = np.meshgrid(lon, lat)
utm_grid = projection(np.ravel(lon2d), np.ravel(lat2d))
model_grid = list(zip(*utm_grid))

# finding the nearest model grid point for every power tower
distance, index = cKDTree(model_grid).query(target_points)
y, x = np.unravel_index(index, lon2d.shape)

wind_speed_array = wind_speed.isel(x=xr.DataArray(x), y=xr.DataArray(y))[:, 0, :].values

# setting up pointers into the flat tower array, one slice per line
start_ids = [0]
start_ids.extend([len(geo_object) for geo_object in geo_object_list])
start_ids = np.cumsum(start_ids)

# creating a dictionary of wind speed dataframes,
# one dataframe per line with one column per power tower
wind_speed_dict = {}
for i, geo_object in enumerate(geo_object_list):
    wind_speed_dict[geo_object.object_id] = pd.DataFrame(
        wind_speed_array[:, start_ids[i]:start_ids[i + 1]],
        index=dt_time
    )

The first thing to notice is that we select the latest deterministic forecast, meps_det_extracted_2_5km_latest.nc, and load it using xarray. An alternative here would have been to download the complete ensemble forecast to get a sense of the uncertainty in the predictions as well. Next, we select wind_speed as the variable to load. Then we use scipy.spatial.cKDTree to find the nearest grid point for each of the ~30 000 power line towers and load the corresponding wind speed.

Forecast service: service creation and monitoring

We have developed both the probability model and the forecast service in Python. The forecasts are updated four times a day. To download the latest forecast as soon as it becomes available, we continuously run a Windows service using Python's win32service package. Only a few lines of code were necessary to do so:


import execnet
import servicemanager
import win32event
import win32service
import win32serviceutil


class AppServerSvc(win32serviceutil.ServiceFramework):
    _svc_name_ = "TestService"
    _svc_display_name_ = "Test Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)

    def SvcDoRun(self):
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_, ''))
        self.main()

    def main(self):
        try:
            # the download logic runs in a separate 64-bit Python via execnet
            import download_service64Bit
            gw = execnet.makegateway(
                "popen//python={}".format(r'C:\Python27_64Bit\python.exe'))
            channel = gw.remote_exec(download_service64Bit)
            sleeptime = 1
            wait = win32event.WAIT_OBJECT_0
            # check the stop event, otherwise ping the worker and wait as instructed
            while win32event.WaitForSingleObject(self.hWaitStop, sleeptime * 1000) != wait:
                channel.send(1)
                sleeptime = channel.receive()
        except BaseException:
            # error handling and logging – omitted in this snippet
            pass

The actual logic of obtaining the fragility curves, downloading new forecasts, and combining this into the desired time series is included in the download_service64Bit module.

To ensure the service behaves as desired, we monitor the process in two ways. Firstly, we use Splunk to monitor the server's Windows event log to pick up any unexpected crashes of the service. The team will receive an email within 15 minutes of an unexpected event. Secondly, we will develop more advanced analytics of our logs using Splunk. This is made very easy when using Python's excellent logging module (a great introduction on why and how you should use this module can be found here).
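
As an illustration (the file name, rotation policy and log fields are made up for this sketch), a logger that produces timestamped, Splunk-friendly log lines takes only a few lines to configure:

import logging
import logging.handlers

logger = logging.getLogger("failure_forecast")
logger.setLevel(logging.INFO)

# rotate the log file at midnight and keep two weeks of history
handler = logging.handlers.TimedRotatingFileHandler(
    "failure_forecast.log", when="midnight", backupCount=14
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s"
))
logger.addHandler(handler)

logger.info("downloaded forecast issued_at=%s lines=%d", "2018-01-31T06:00Z", 102)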

Publishing results to Tableau

We use Tableau as a quick and easy way to create interactive and visually attractive dashboards. An example screenshot can be seen below:

full

In the top left corner, information about the overall system state is shown. The percentage is the total probability of at least one line failing in the coming forecast period of three days. The dashboard shows the situation just before a major storm in January this year, leading to a very high failure probability. Seven failures due to severe wind actually occurred two days after the forecast. The bar plot shows how much of the estimated failure probability is due to wind versus lightning.

topleft
Overview of main system state

The main graph shows the development of the failure probability over time. When no specific line is selected, this graph shows the failure probability for the whole power system:

timechart

When one line is selected, it shows the details for this line: wind speed or lightning intensity, and estimated failure probability over time.

dd

The heat map at the bottom of the screen shows the probability of failure per overhead line per hour.

heatmap-1

The list of lines is ordered, listing the lines that are most likely to fail on top. This provides the system operator with a good indication of which lines to focus on. The heat map shows how the failure probability develops over time, showing that not all lines are maximally exposed at the same time. The wind/lightning emoji before the name of a line indicates whether that line is exposed to high wind or lightning.

Going from idea to deployment in one week and ten minutes

This week's goal was to go from idea to deployment in one week by setting aside all other tasks. However, as Friday evening came closer, more and more small bugs kept popping up, both in the data visualization and in the back-end.

IMG_2029
Friday afternoon debugging

In the end, we got the service at least 80% complete. To get the last things in place we might need just 10 minutes more…

Estimating the probability of failure for overhead lines

In Norway, about 90 percent of all temporary failures on overhead lines are due to weather. In this post, we present a method to model the probability of failures on overhead lines due to lightning.

Welcome to the blog for Data Science in Statnett, the Norwegian electricity transmission system operator. We use data science to extract knowledge from the vast amounts of data gathered about the power system and suggest new data-driven approaches to improve power system operation, planning and maintenance. In this blog, we write about our work. Today’s topic is a model for estimating the probability of failure of overhead lines.

Knowing the probability of failure is central to reliability management

For an electricity transmission system operator like Statnett, balancing power system reliability against investment and operational costs is at the very heart of our operation. However, a more data-driven approach can improve on the traditional methods for power system reliability management. In the words of the recently completed research project Garpur:

Historically in Europe, network reliability management has been relying on the so-called “N-1” criterion: in case of fault of one relevant element (e.g. one transmission system element, one significant generation element or one significant distribution network element), the elements remaining in operation must be capable of accommodating the new operational situation without violating the network’s operational security limits.

Today, the increasing uncertainty of generation due to intermittent energy sources, combined with the opportunities provided e.g. by demand-side management and energy storage, call for imagining new reliability criteria with a better balance between reliability and costs.

In such a framework, knowledge about failure probabilities becomes central to power system reliability management, and thus to the whole planning and operation of the power system. When predicting the probability of failure, weather conditions play an important part; in Norway, about 90 percent of all temporary failures on overhead lines are due to weather, the three main weather parameters influencing the failure rate being wind, lightning and icing. In this post, we present a method to model the probability of failures on overhead lines due to lightning. The full procedure is documented in a paper submitted to PMAPS 2018. In an upcoming post we will demonstrate how this knowledge can be used to predict failures using weather forecast data from met.no.

Data sources: failure statistics and weather data

Statnett gathers failure statistics and publishes them annually. Each failure is classified according to its cause. For this work, we considered 102 different high-voltage overhead lines. For these lines there were 329 failures due to lightning in the period 1998–2014.

We have used reanalysis weather data computed by Kjeller Vindteknikk. These reanalysis data cover the period from January 1979 until March 2017 and consist of hourly historical time series for lightning indices on a 4 km by 4 km grid. The important property with respect to the proposed method is that the finely meshed reanalysis data allow us to use the geographical position of the power line towers and line segments to extract lightning data from the reanalysis data set. Thus it is possible to evaluate the historical lightning exposure of the transmission lines.

Lightning indices

The first step is to look at the data. Lightning is a sudden discharge in the atmosphere caused by electrostatic imbalances. These discharges occur between clouds, within clouds, or between clouds and the ground. There is no atmospheric variable directly associated with lightning. Instead, meteorologists have developed regression indices that measure the probability of lightning. Two of these indices are linked to the probability of failure of an overhead line: the K index and the Total Totals index. Both can be calculated from the reanalysis data.

figur1
Figure 1 Rank and K index for lightning failures

Figure 1 shows how lightning failures are associated with high and rare values of the K and Total Totals indices, computed from the reanalysis data set. For each time of failure, the highest values of the K and Total Totals indices over the geographical span of the transmission line have been calculated, and these values are then ranked among all historical values of the indices for that line. This illustrates how different lines fail at different levels of the index values, but maybe even more important: the link between high index values and lightning failures is very strong. Considering all the lines, 87 percent of the failures classified as "lightning" occur within the 10 percent of hours with the highest index values. This is promising…

figur3
Figure 2: TT index versus K index shows seasonality trend

In Norway, lightning typically occurs in the summer, as cumulonimbus clouds build up during the afternoon. But there is a significant number of failures due to thunderstorms during the rest of the year as well, winter months included. To see how the K and TT indices behave in different seasons, the values of these two indices are plotted at the time of each failure in Figure 2. From the figure it is obvious, though the data is sparse, that there is relevant information in the Total Totals index that has to be incorporated into the probability model of lightning-dependent failures. The K index has a strong connection with lightning failures in the summer months, whereas the Total Totals index seems to be more important during the winter months.

Method in brief

The method is a two-step procedure: First, a long-term failure rate is calculated based on Bayesian inference, taking observed failures into account. This step ensures that lines with relatively more observed failures, and thus more prone to failure, get a relatively higher failure rate. Second, the long-term annual failure rates calculated in the first step are distributed into hourly probabilities. This is done by modelling the probabilities as a function of relevant meteorological parameters while ensuring that the probabilities are consistent with the failure rates from step 1.

Bayesian update

From the failure statistics we can calculate a prior failure rate \lambda due to lightning simply by summing the number of failures per year and dividing by the total length of the overhead lines. We then arrive at a failure rate per 100 km per year. This is our prior estimate of the failure rate for all lines.

When we observe a particular line, the failures arrive in what is termed a Poisson process. If we assume an exponentially distributed prior for the failure rate, we arrive at a convenient expression for the posterior failure rate \lambda^B:

\lambda^B = \frac{1 + \sum{y_i}}{\frac{1}{\lambda} + n}

where n is the number of years with observations, \lambda is the prior failure rate and y_i is the number of observed failures in year i.
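
In code, the update is a one-liner. A small sketch (the prior rate and the observation vector are made-up numbers):

def posterior_failure_rate(prior_rate, yearly_failures):
    # posterior failure rate lambda^B given an exponential prior with mean prior_rate
    # and a list of observed failure counts, one entry per year
    n = len(yearly_failures)
    return (1 + sum(yearly_failures)) / (1.0 / prior_rate + n)

# e.g. a prior of 0.3 failures per year and five years of observations:
# posterior_failure_rate(0.3, [0, 1, 0, 0, 2]) -> 0.48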

bayes
Figure 3: The prior and the posterior distribution. Dashed vertical lines are the
expectations.

Distributing the long term failure rates over time

We now have the long-term failure rate for lightning, but we have to establish a connection between the K index, the Total Totals index and the failure probability. The goal is to end up with hourly failure probabilities we can use in Monte Carlo simulations of power system reliability.

The dataset is heavily imbalanced. There are very few failures (positives), and the method has to account for this so we don't end up predicting a 0 % probability all the time. A good explanation of learning from imbalanced datasets can be found in this KDnuggets blog post.

Many approaches could be envisioned for this step, including several variants of machine learning. However, for now we have settled on an approach using fragility curves which is also robust for this type of skewed/biased dataset.

A transmission line can be considered a series system of many line segments between towers. We assume that the segment with the worst weather exposure is representative of the transmission line as a whole.

We then define the lightning exposure at time t:

w^t = \alpha_K \max(0, K^t_{max} - K_{\text{thres}})^2 + \alpha_{TT} \max(0, TT^t_{max} - TT_{\text{thres}})^2

where \alpha_K and \alpha_{TT} are scale parameters, K_{max}^t is the maximum K index along the line at time t, and TT_{max}^t is the maximum Total Totals index along the line at time t. K_{\text{thres}} and TT_{\text{thres}} are threshold values for the lightning indices, below which the indices have no impact on the probability.

Each line then has a probability of failure at time t given by:

p_L^t = F(w^t; \sigma_L, \mu_L)

where F(\cdot) is the cumulative log normal function.
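
Putting the two formulas together, the hourly failure probability of a line can be computed along the lines of the sketch below. The default threshold and scale parameters are those reported further down; scipy's lognorm is parametrized with s = \sigma_L and scale = e^{\mu_L}.

import numpy as np
from scipy.stats import lognorm

def lightning_exposure(k_max, tt_max, alpha_k=0.88, alpha_tt=0.12,
                       k_thres=20.0, tt_thres=45.0):
    # exposure w^t from the maximum K and Total Totals indices along the line
    return (alpha_k * np.maximum(0.0, k_max - k_thres) ** 2
            + alpha_tt * np.maximum(0.0, tt_max - tt_thres) ** 2)

def failure_probability(k_max, tt_max, mu, sigma):
    # hourly failure probability p_L^t = F(w^t; sigma_L, mu_L)
    return lognorm.cdf(lightning_exposure(k_max, tt_max), s=sigma, scale=np.exp(mu))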

To find the standard deviation and expected value that describe the log-normal function, we solve the following minimization problem, which ensures that the expected number of failures equals the posterior failure rate:

\mu_L, \sigma_L = \underset{\mu, \sigma}{\text{argmin}} \: g(p^t_L; \mu, \sigma)

where

g(p^t_L; \mu, \sigma) = \left(\lambda^B - \frac{1}{k}\sum_{t=0}^T p^t_L\right)^2

If you want to delve deeper into the maths behind the method, see the paper we will present at PMAPS 2018.
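
As a sketch of the calibration step, interpreting the 1/k factor as the normalization from the number of hours in the data set to failures per year (variable names are mine):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def fit_fragility_curve(exposure, lambda_posterior, n_years):
    # find (mu_L, sigma_L) such that the expected number of failures per year
    # matches the posterior failure rate lambda^B
    def objective(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        p = lognorm.cdf(exposure, s=sigma, scale=np.exp(mu))  # hourly p_L^t
        return (lambda_posterior - p.sum() / n_years) ** 2

    result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
    return result.x  # mu_L, sigma_L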

Fitting the model to data

In this section, simulation results are presented where the models have been applied to the Norwegian high-voltage grid. In particular, 99 transmission lines in Norway have been considered: 13 lines at 132 kV, 2 lines at 220 kV, 60 lines at 300 kV and 24 lines at 420 kV. Except for the 132 and 220 kV lines, which are situated in Finnmark, the lines are distributed evenly across Norway.

The threshold parameters K_{\text{thres}} and TT_{\text{thres}} have been set empirically to K_{\text{thres}} = 20.0 and TT_{\text{thres}} = 45.0. The two scale parameters \alpha_K and \alpha_{TT} have been set by heuristics to \alpha_K = 0.88 and \alpha_{TT} = 0.12, to reflect the different weights of the seasonal components.

Results

The probability models presented above are being used by Statnett as part of a Monte Carlo tool to simulate failures in the Norwegian transmission system for long term planning studies. Together with a similar approach for wind dependent probabilities, we use this framework as the basic input to these Monte Carlo simulation models. In this respect, the most important part of the simulations is to have a coherent data set when it comes to weather, such that failures that occur due to bad weather appear logically and consistently in space and time.

figur4
Figure 4: Seasonal differences in K index and TT index for simulated results

Figure 4 shows how the probability model captures the different values of the K index and the Total Totals index as the time of the simulated failures varies over the year. This figure should be compared with figure 2. The data in Figure 4 is one out of 500 samples from a Monte Carlo simulation, done in the time period from 1998 to 2014.

The next figures show a zoomed-in view of some of the actual failures, each figure showing how actual failures occur at times of elevated historical failure probabilities.

figur5
figur6
figur7
figur8

 
