How we quantify power system reliability

An important aspect of digital transformation for a utility company is to make data-driven decisions about risk. In electric power systems, the hardest part in almost any decision is to quantify the reliability of supply. That is why we have been working on methodology, data quality and simulation tools with this in mind for several years at Statnett. This post describes our simulation tool for long-term probabilistic reliability analysis called MONSTER.

The need for reliable power system services is ever increasing as technology advances. At the same time, the focus on carefully analysed and socio-economically profitable investments has perhaps never been more essential. In light of this, several initiatives advocate probabilistic criteria as the framework for reliability analysis, and MONSTER is our contribution to that effort.

Model overview

Our simulation methodology is described in this IEEE PMAPS paper from 2018: A holistic simulation tool for long-term probabilistic power system reliability analysis (you need IEEE Xplore membership to access the article).

Here I’ll repeat the basics, ignore most of the maths and expand on some details that were left out of the paper.

Our tool consists of two main parts: a Monte Carlo simulation module and a Contingency analysis module. The main idea in the simulation part is to use hourly time series of failure probability for each component.

We do Monte Carlo simulations of the power system where we draw component failure times and durations, and then let the Contingency analysis module calculate the consequence of each of these failures. The Contingency evaluation module is essentially a power system simulation using optimal loadflow to find the appropriate remedial actions, and the resulting “lost load” (if any).

The end result of the whole simulation is typically expressed as expected loss of load, expected socio-economic costs or distributions of the same parameters. Such results can, for instance, be used to make investment decisions for new transmission capacity.

Time series manager

The main principle of the Time series manager is to first use failure observations to calculate individual failure rates for each component by using a Bayesian updating scheme, and then to distribute the failure rates in time by using historical weather data.  We have described the procedure in detail in a previous blog post.

We first calculate individual, annual failure rates using observed failure data from the power system. Then, for each component, we calculate an hourly time series for the probability of failure. The core step in this approach is to calculate the failure probabilities based on reanalysis weather data such that the resulting expected number of failures is consistent with the long-term annual failure rate calculated in the Bayesian step. This is about spreading the failure rate in time such that we get one probability of failure at each time step. In this way, components that experience the same adverse weather will have elevated probabilities of failure at the same time. This phenomenon is called failure bunching, and its effect is illustrated in the figure below. As a result, we get a consistent and realistic geographical and temporal development of failure probabilities throughout the simulation period.
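
As a conceptual illustration of the spreading step (not Statnett’s actual calibration, which is described in the PMAPS paper and the earlier blog post), one can scale a weather-dependent exposure series so that the hourly probabilities sum to the expected number of failures; the exposure function and parameters below are purely illustrative:

    import numpy as np

    def spread_failure_rate(annual_rate, exposure, hours_per_year=8760):
        """Distribute an annual failure rate over hourly failure probabilities.

        annual_rate : expected number of failures per year (from the Bayesian step)
        exposure    : hourly weather-dependent exposure, e.g. a function of wind speed
        """
        exposure = np.asarray(exposure, dtype=float)
        n_years = len(exposure) / hours_per_year
        # Scale so the hourly probabilities sum to the expected number of failures
        p_hourly = annual_rate * n_years * exposure / exposure.sum()
        return np.clip(p_hourly, 0.0, 1.0)

    # Example: 30 years of hourly wind speeds, exposure only above 15 m/s
    rng = np.random.default_rng(0)
    wind = rng.weibull(2.0, size=30 * 8760) * 8.0
    exposure = np.maximum(wind - 15.0, 0.0) ** 2 + 1e-6   # illustrative exposure function
    p = spread_failure_rate(annual_rate=0.8, exposure=exposure)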

Failure bunching dramatically increases the probability of multiple independent failures happening at the same time.

In our model we divide each fault into eight different categories: the fault is either temporary or permanent, and each failure can be due to wind, lightning, snow/icing, or causes unrelated to weather. Thus, for each component we get eight historical time series to be used in the simulations.


Biases in the fault statistics

There are two data sources for this part of the simulation: observed failure data from the national fault statistics database FASIT, and approximately 30 years of reanalysis weather data calculated by Kjeller Vindteknikk. The reanalysis dataset is nice to work with from a data science perspective: as it is simulated data, there are no missing values or outliers, and the data quality is uniform.

The fault statistics, however, contain lots of free-text columns, missing values and inconsistent reporting (done by different people across the nation over two decades). So a lot of data preparation is necessary to go from reported disturbances to statistics relevant for simulation. There are also some biases in the data to be aware of:

  • There is a bias towards too long repair/outage times. This happens because most failures don’t lead to any critical consequences, so we spend more time than strictly necessary repairing the component or otherwise restoring normal operation. For our simulation we want the repair time we would have needed in case of a critical failure. We have remedied this to some extent by eliminating some of the more extreme outliers in our data.
  • There is a bias towards reporting too low failure rates; only failures that lead to an immediate disturbance in the grid count as failures in FASIT. Failures that lead to controlled disconnections are not counted as failures in this database, but they are relevant for our simulations if the disconnection is forced (as opposed to postponable).

The two biases counteract each other to some extent. In addition, this kind of epistemic uncertainty can be mitigated by doing sensitivity analyses.

Improving the input-data: Including asset health

Currently, we don’t use component-specific failure rates. For example, all circuit breakers have the same failure rate regardless of age, brand, operating history etc. For long-term simulations (like new power line investment decisions) this is a fair assumption, as components are regularly maintained and swapped out over time.

The accuracy of the simulations, especially in the short to medium term, can be greatly improved by including some sort of component-specific failure rate. Indeed, this is an important use case in and of itself for risk-based asset maintenance.

Monte Carlo simulation

A Monte Carlo simulation consists of a large number of random possible/alternative realizations of the world, typically on the order of 10000 or more. In our tool, we do Monte Carlo simulations for the time period where we have weather data, denoted by k years, where k ≈ 30. For each simulation, we first draw the number N of failures in this period for each component and each failure type. We do this by using the negative binomial distribution. Then we distribute these failures in time by drawing with replacement from the k × 8760 hours in the simulation period, such that the probability of drawing any particular hour is proportional to the time series of probabilities calculated earlier. As the time resolution of the probabilities is hourly, we also draw a uniformly random failure start time within the hour. Finally, for each of these failures we draw a random outage duration:

for each simulation do
    for each component and failure type in study do
        draw the number N of failures
        draw N failures with replacement
        for each failure do
            draw random uniform start time within hour
            draw duration of failure
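
A minimal sketch of the sampling step in numpy; the negative binomial parameters and the lognormal outage duration are assumptions for illustration, whereas the real tool derives them from the Bayesian model and the repair statistics:

    import numpy as np

    rng = np.random.default_rng(42)
    hours = 30 * 8760
    p_hourly = rng.random(hours) * 1e-4        # stand-in for the time series manager output

    # Draw the number of failures in the whole period (parameters are illustrative;
    # in the real model they follow from the Bayesian updating scheme)
    n_failures = rng.negative_binomial(n=2, p=2 / (2 + p_hourly.sum()))

    # Distribute the failures in time, weighted by the hourly probabilities
    failure_hours = rng.choice(hours, size=n_failures, replace=True,
                               p=p_hourly / p_hourly.sum())
    start_times = failure_hours + rng.random(n_failures)   # uniform within the hour

    # Draw outage durations, here from an assumed lognormal distribution (in hours)
    durations = rng.lognormal(mean=1.0, sigma=1.0, size=n_failures)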
Convergence of the most and least frequent single- and double-failure rates for the first 1000 simulations.

Outage manager

In addition to drawing random failures, we also have to incorporate maintenance planning. This can be done either as an explicit manual plan or through an optimization routine that distributes the maintenance work according to needs and probabilities of failure.

How detailed the maintenance plan is modeled typically depends on the specific area of study. In some areas, the concern is primarily a failure occurring during a few peak load hours when margins are very tight. At such times it is safe to assume that there is no scheduled maintenance, and we disregard it entirely in the model. In other areas, the critical period might be when certain lines are out for maintenance, and so it becomes important to model maintenance realistically.

For each simulation, we step through the points in time where something happens and collect which components are disconnected. In this part we also add points in time where the outage passes some predefined duration, called an outage phase in our terminology. We do this to be able to apply different remedial measures depending on how much time has passed since the failure occurred. The figure below illustrates the basic principles.

Collecting components and phases to the contingency list. Dashed lines indicate phase 1 and solid lines phase 2. In this example the two outages contribute to five different contingencies. In phase 1, only automatic remedial measures (like system protection) can be applied. In phase 2, manual measures, like activation of tertiary reserves, can be applied.

One important assumption is that each contingency (like the five contingencies in the figure above) can be evaluated independently. This makes the contingency evaluation, which is the computationally heavy part, embarrassingly parallel. Finally, we eliminate all redundant contingencies. We end up with a pool of unique contingencies that we send to the Contingency Analysis module for load flow calculations in order to find interrupted power.

Contingency analysis

Our task in the Contingency analysis module is to mimic the system operator’s response to failures. This is done in four steps:

  1. Fast AC screening to filter out contingencies that don’t result in any thermal overloads. If there are overloads, proceed to the next step.
  2. Solve the initial AC load-flow; if it converges, move to the next step. In case of divergence (due to voltage collapse), attempt to emulate the effect of voltage collapse with a heuristic, and report the consequence.
  3. Apply remedial measures with DC-optimal load-flow, report all measures applied and go to next step.
  4. Shed load iteratively using AC load-flow until no thermal overloads remain. Report lost load.

In step 3 above, there are many different remedial measures that can be applied:

  • Reconfiguration of the topology of the system
  • Reschedule production from one generator to another
  • Reduce or increase production in isolation
  • Change the load at given buses

For the time being, we do not consider voltage or stability problems in our model apart from the rudimentary emulation of full voltage collapse. However, it is possible to add system protection to shed load for certain contingencies (which is how voltage stability problems are handled in real life).

As a result of the contingency analysis, we get the cost of applying each measure and the interrupted power associated with each contingency.

Load flow models and OPF algorithm

Currently we use Siemens PTI PSS/E to hold our model files and to solve the AC load flow in steps 1, 2 and 4 above. For the DC optimal load-flow we express the problem with power transfer distribution factors (PTDF) and solve it as a mixed integer linear program using a commercial solver.
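
The core of the PTDF formulation can be illustrated as a plain LP in scipy; the real module adds integer variables (e.g. for topology changes) and runs on a commercial solver, and the toy three-bus network below is illustrative only, not one of our grid models:

    import numpy as np
    from scipy.optimize import linprog

    # Toy 3-bus network: lines 1-2, 1-3, 2-3 with equal reactances, bus 3 as slack.
    PTDF = np.array([[1/3, -1/3, 0.0],
                     [2/3,  1/3, 0.0],
                     [1/3,  2/3, 0.0]])

    load = np.array([0.0, 90.0, 90.0])      # MW per bus (illustrative)
    gmax = np.array([150.0, 100.0, 100.0])  # one generator per bus
    cost = np.array([10.0, 30.0, 50.0])     # marginal costs (illustrative)
    fmax = np.array([50.0, 60.0, 60.0])     # line limits in MW

    # minimize cost @ g  s.t.  sum(g) = sum(load)  and  |PTDF @ (g - load)| <= fmax
    A_ub = np.vstack([PTDF, -PTDF])
    b_ub = np.hstack([fmax + PTDF @ load, fmax - PTDF @ load])
    res = linprog(cost,
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, 3)), b_eq=[load.sum()],
                  bounds=[(0.0, g) for g in gmax],
                  method="highs")
    print(res.x, res.fun)   # cheapest dispatch that respects the line limits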

Cost evaluation – post-processing

Finally, having established the consequence of each failure, we calculate the socio-economic costs associated with each interruption. This is done by following the Norwegian regulation. The general principle is described in this 2008 paper, but the interruption costs have been updated since then. For each failure, we know where in the grid there are disconnections, for how long they last, and assuming we know the type of users connected to the buses in our load-flow model, the reliability indices for different buses will be easy to calculate. Due to the Monte Carlo approach, we will get a distribution of each of the reliability indices.

Result example: failure rate and lost load averaged over all simulations for some contingencies (black dots).

A long and winding development history

MONSTER has been in development for several years at Statnett, after an initial study found that no commercial tools were available that fit the bill. At first we developed two separate tools: a Monte Carlo simulation tool in Matlab, used to find the probability of multiple contingencies, and a Python AC contingency evaluation module built on top of PSS/E, used to do contingency analysis with remedial measures. Both tools were built to solve specific tasks in the power system analysis department. However, combining the tools was a cumbersome process that involved large spreadsheets.

At some point, we decided to do a complete rewrite of the code. The rewriting process has taught us much of what we know today about modern coding practices (like 12-factor apps), and it has helped build the data science team at Statnett. Now, the backend is written in Python, the front end is a web app built around node.js and vue, and we use a SQL database to persist settings and results.

A screenshot from the application where the user defines substation configuration.

Prediction of rare events and the challenges it brings: Wind failures on power lines

Summer internship blog 2018: Comparing different weather data, evaluating their corresponding probabilistic predictions and trying to develop a better forecast model.

This summer we have had two summer interns working as part of the data science team at Statnett: Norunn Ahdell Wankel from Industrial Mathematics at NTNU and Vegard Solberg from Environmental Physics and Renewable Energy at NMBU. We asked them to look at our failure prediction models and the underlying data sources with a critical eye. They have written this blog post to document their work. At the end of the article we offer some reflections. Enjoy!

Statnett already has its own models used to forecast the probability of failure due to wind or lightning on the transmission lines. If you want to know more about the lightning model and how it is used to forecast probability of failure, we recommend having a look at some previous blog posts from April on the details of the model and how it is used. Both of the models are, however, based on a different climate data source (reanalysis data computed by Kjeller Vindteknikk) than what is used as input in real-time forecasting (weather forecasts from the Norwegian Meteorological Institute – Met). With this in mind, together with the fact that Statnett’s models have not yet been properly evaluated or challenged by other models, we aim to answer the following questions:

  1. How similar is the weather data from Kjeller and Met?
  2. How good is the model currently in use, developed by the Data Science team?
  3. Can we beat today’s model?

Short overlap period between the weather data from Met and Kjeller

We must have an overlap period of data from Met and Kjeller to be able to compare the differences in weather. Unfortunately, we only had four full months of overlap. This period corresponds to February 1 – May 30, 2017, for which we have hourly data for each tower along 238 transmission lines. In the overlap period there were only failures due to wind, and none due to lightning. Therefore we have chosen to focus only on the wind model, and hence the wind speed, for the rest of this blog post.

Kjeller reports stronger wind

Kjeller gives a wind speed that on average is 0.76 m/s higher than Met in the overlap period. However, this is not the case for all lines. We get a feeling for the variation between lines by averaging the difference (Met minus Kjeller) per tower within a line and sorting the differences in descending order. This is shown in the second column of the table below, from which it is clear that there is variation rather than a single trend. The mean differences range from +1.55 to -2.74 m/s. Noting also that only 58 of the 238 lines have a positive mean difference, the trend, if we can speak of one, is that Kjeller records a higher wind speed on average for most lines.

Table that in several ways summarizes the difference between wind speeds reported by Kjeller and Met.

The third and fourth columns display the number of hours in which at least one tower along a line had a wind speed above 15 m/s (i.e. that the max speed over the whole line exceeded 15 m/s in that hour). Why exactly 15 m/s? Because Statnett’s model for wind failures uses this threshold, meaning that a wind speed lower than this does not affect the wind exposure and therefore not the probability of failure. We see quite a large difference between Met and Kjeller. For 179 of the 238 lines, Kjeller reported a larger number of hours than Met. To visualize the difference better, we can plot the difference in the number of hours the max wind speed is above 15 m/s between Met and Kjeller as a histogram, see the figure below.

On a majority of the transmission lines, there are more hours with wind speed above 15 m/s in the Kjeller dataset.

Clearly the histogram also shows that Kjeller reports more hours than Met for most lines. However, only a few lines have an absolute difference of more than 200 hours.

Larger variance in Kjeller data

We found that Kjeller has a greater average variance within a line than Met. By finding the standard deviation along a line at each specific time point, and thereafter averaging over all times, we get the values reported in the columns “Mean std per hour”. In 199 cases Kjeller has a larger standard deviation than Met, indicating that Kjeller reports more varied wind speeds along a line than Met.

Kjeller also has a higher variance throughout the whole overlap period. We calculated the standard deviation over all hours for each tower, and then found the mean standard deviation per tower along a line. The numbers are found in the columns “mean std per tower”. They tell us something about how the wind speeds for a tower tend to vary during the whole time period. It turns out that Kjeller has a higher standard deviation in 182 of the cases.
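
The summary statistics above are straightforward to compute with pandas; the sketch below assumes a long-format table with one row per tower and hour and hypothetical columns line, tower, time, wind_met and wind_kjeller:

    import pandas as pd

    def compare_wind_sources(df: pd.DataFrame) -> dict:
        """Per-line summaries, given columns line, tower, time, wind_met, wind_kjeller."""
        mean_diff = (df["wind_met"] - df["wind_kjeller"]).groupby(df["line"]).mean()

        # Hours where the max wind speed along a line exceeds 15 m/s, per data source
        line_hour_max = df.groupby(["line", "time"])[["wind_met", "wind_kjeller"]].max()
        hours_above_15 = (line_hour_max > 15.0).groupby("line").sum()

        # "Mean std per hour": spread along the line at each hour, averaged over time
        std_per_hour = (df.groupby(["line", "time"])[["wind_met", "wind_kjeller"]]
                          .std().groupby("line").mean())

        # "Mean std per tower": variation over time per tower, averaged along the line
        std_per_tower = (df.groupby(["line", "tower"])[["wind_met", "wind_kjeller"]]
                           .std().groupby("line").mean())

        return {"mean_diff": mean_diff.sort_values(ascending=False),
                "hours_above_15": hours_above_15,
                "std_per_hour": std_per_hour,
                "std_per_tower": std_per_tower}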

Is there any geographical trend?

Having summarized wind speeds for each line, we were curious to see if there could be a geographical trend. We looked at the mean differences when grouping the transmission lines based on which region they belong to: N, S or M (north, south, mid). This is illustrated in the plot below.

Plot of the mean difference in wind speed per tower for all lines, grouped by region. Difference is taken as Met-Kjeller. Each transmission line belongs to one region, either N (north), S (south) or M (mid). On average the wind speed is higher in the Kjeller dataset.

As seen from the plot, “N” is the region that stands out a bit from the others, in that Kjeller is mainly lower than Met there. The spread in values is, however, large for all regions. Maybe another grouping of the data would have revealed stronger trends. One could, for instance, have investigated whether there is any trend tower-wise based on latitude and longitude coordinates for each tower. It would also have been interesting to plot the results on a map. By adding the grid points for both Met and Kjeller, one could more clearly have seen if the location of these grid points could explain some of the differences in weather.

Keep in mind that…

The wind speed computed by Kjeller is from a height of 18 metres above the ground, while it is 10 metres for Met. We are not sure how much of the difference is due to this fact alone, but one could probably have found ways to adjust for this height difference. The resolution for Met is 2.5 km, while it is 1 km for Kjeller, meaning that the grid points for which we have weather are located closer to each other in the Kjeller data. Hence it might not be so unexpected that Kjeller reports more varied wind speeds along a line, and therefore has a higher standard deviation within a line. As already seen, the variation within a line is greater for Kjeller than Met for most lines. Note, however, that even though the resolutions of the grids differ, one might expect the trend over time for each line to be the same for Met and Kjeller. If, for instance, Met had the same trend in wind speed over time for each tower along a line as Kjeller, only with the values shifted by a constant term, the standard deviation per tower should have been the same. However, this is at least not the case overall, as the mean standard deviation per tower differs quite a bit for several lines, as seen in the previous table.

Using Met and Kjeller data as input to the model

We have seen that there are differences between the weather data from Met and Kjeller. The question is whether this difference in fact has any impact on the forecasting skill of the model. To check this, we used Statnett’s model for wind failures to predict the probability of failure throughout the overlap period using both Kjeller and Met data. In addition, we ran the model after adding the mean line difference (this time in terms of Kjeller minus Met) to each individual line in the Met data.

To check whether the model predicts well, we also defined a baseline prediction that simply predicts the long-term failure rate for all samples.

Aggregated probabilities to make data less unbalanced

In the overlap period there were only 6 failures due to wind in total, on 4 different lines. This makes our dataset extremely unbalanced. A way to make it less unbalanced is to aggregate the hourly probabilities to a daily probability, p_{d}. Assuming independent events, this can be done using the following formula

p_{d} = 1 - \prod_{i=1}^{24} (1 - p_{i})

where p_{i} is the probability of failure in hour i for a given line.
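
In code, the aggregation from hourly to daily probabilities can look like this (pandas/numpy sketch with dummy data; a real run would use the model output indexed by time and line):

    import numpy as np
    import pandas as pd

    hours = pd.date_range("2017-02-01", periods=24 * 7, freq="h")
    p_hourly = pd.Series(np.random.uniform(0, 1e-3, size=len(hours)), index=hours)

    # P(at least one failure during the day), assuming independent hourly events
    p_daily = 1.0 - (1.0 - p_hourly).groupby(p_hourly.index.date).prod()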

How do we evaluate the predictions?

To evaluate the forecasts produced by the model on Kjeller and Met data, we must use some type of score. There are many options, but we have focused on two widely used scores: the Brier score and the log loss score. The Brier score is an evaluation metric used for evaluating a set of predictions, i.e. probabilities of belonging to a certain class. In our case we have a binary classification problem (failure, non-failure) and corresponding predictions for the probability of failure. The Brier score for N predictions in total is formulated as

BS = \frac{1}{N}\sum_{i=1}^{N}(\hat{y}_{i} - y_{i})^{2}, \;\; \hat{y}_{i}\in [0,1], \;y_{i} \in \{0,1\},

where \hat{y}_{i} is the predicted probability of failure and {y}_{i} is 1 if it is a failure and 0 otherwise. The Brier score can be interpreted as the mean squared error of the forecasts. Naturally, we wish to minimize this error, so a small Brier score is desired. Note that the score ranges from 0 to 1, where 0 is the best.

The contribution of a sample’s predicted probability on the evaluation score for the logistic loss and Brier score.

Log loss is another evaluation metric we have considered for our problem and it is defined as follows:

LL = -\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log(\hat{y}_{i}) + (1-y_{i}) \log(1-\hat{y}_{i})\right), \;\; \hat{y}_{i}\in [0,1], \;y_{i} \in \{0,1\}.

Log loss, also known as logistic loss or cross-entropy loss, is also a measure of the error in the prediction, so small numbers are good here as well. Note that the score ranges from 0 to in theory +\infty (in Python’s sklearn library this upper limit is however approximately 34.5 based on the logarithm of machine precision), where 0 is the best. The figure above shows how each prediction contributes to the final Brier score and logistic loss for the case of failure/non-failure for all values of the prediction in the range [0,1]. Both metrics have in common that their contribution tends to zero the closer the predicted probability is to the true class. However, they differ in how much they penalize bad predictions. We use scikit-learn’s logistic loss metric and Brier score.
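
Scoring a set of predictions with both metrics is a one-liner each in scikit-learn; the arrays below are dummy stand-ins for the actual labels and model output:

    import numpy as np
    from sklearn.metrics import brier_score_loss, log_loss

    y_true = np.zeros(100_000, dtype=int)
    y_true[[10, 500, 40_000]] = 1                          # a handful of failures
    y_prob = np.random.uniform(0, 0.04, size=y_true.size)  # model output, max ~4 %

    print("Brier score:", brier_score_loss(y_true, y_prob))
    print("Log loss   :", log_loss(y_true, y_prob))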

Also note that with only 6 failures, it is difficult to be very conclusive when evaluating models. Ideally the models should be evaluated over a much longer period.

Kjeller data performs better, but baseline is the best

Below is a figure of how the model performs when making predictions using weather forecasts (denoted Met), compared to the baseline forecast. Both hourly and daily values are shown. To be clear, a relative score under 1 beats the same model using hindcast data (i.e. Kjeller data).

First we note that the baseline prediction model beats all others, except for the daily Kjeller prediction using logistic loss. The difference in Brier score for forecasts with Met seems negligible, only 0.96 % and 2.9 % better for hourly and daily scores, respectively. For log loss the difference is notable, with Met 0.19 % and 41 % worse for hourly and daily scores, respectively.

The naive baseline model performs best on most metrics. A prediction beats the model using hindcast data from Kjeller if the score is below 1.

The Brier score for the forecasts where the mean line difference is added to Met is very similar to Kjeller as well. For log loss the score is worse than Kjeller for all cases, however slightly better than Met for daily forecasts when adding the mean difference for each line.

Logistic loss might be more appropriate for us

A relevant question is whether the Brier score and logistic loss are appropriate evaluation metrics for our problem. It is not obvious that small, desired changes in the predicted probabilities are reflected in the scores. As mentioned earlier, we have a very large class imbalance in our data, which means that the predicted probabilities are generally very low. The maximum hourly forecasted probability is 4%. We want our evaluation metrics to be able to notice increased predicted probabilities on the few actual failures. For instance, how much better would a set of predictions be if the probabilities on the actual failures were doubled? Would the metrics notice this?

To check this, we manipulate the probabilities on the actual failures for the set of predictions based on the Met data, while keeping the rest of the predicted probabilities as they initially were. First, we set all the probabilities on the actual failures equal to 0 and scored this with the Brier score and logistic loss. We did the same several times, overwriting the probabilities on the actual failures with values from 0% to 6% and calculating the scores for each overwrite. This is the range of probabilities in which our model predicts, and we therefore expect the metrics to decrease as we move from 0% to 6%.
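
A sketch of that experiment; here y_true and y_pred are dummy stand-ins for the actual labels and the Met-based predictions:

    import numpy as np
    from sklearn.metrics import brier_score_loss, log_loss

    rng = np.random.default_rng(1)
    y_true = np.zeros(100_000, dtype=int)
    y_true[rng.choice(y_true.size, 6, replace=False)] = 1   # six failures, as in the test period
    y_pred = rng.uniform(0, 0.04, size=y_true.size)         # stand-in for the Met-based predictions

    base = (brier_score_loss(y_true, y_pred), log_loss(y_true, y_pred))
    for p in np.arange(0.0, 0.061, 0.01):
        manipulated = y_pred.copy()
        manipulated[y_true == 1] = p                         # overwrite only the actual failures
        rel_brier = brier_score_loss(y_true, manipulated) / base[0]
        rel_logloss = log_loss(y_true, manipulated) / base[1]
        print(f"p={p:.2f}  Brier x{rel_brier:.3f}  log loss x{rel_logloss:.3f}")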

This figure shows how slowly the Brier score decreases (gets better) as the probabilities on actual failures are overwritten.

The results are shown in the figure above, where the scores are scaled by the score of the actual predictions (the initial ones before overwriting any predictions). We see that the score for the logistic loss decreases rapidly compared to the Brier score. For instance, when the probability is manipulated to 2 % for all failures, the resulting decrease is 4 % for the Brier score and over 70 % for the logistic loss.

Logistic loss is in some sense more sensitive to change in predicted probabilities, at least for low probabilities. This is a desirable quality for us, because our model does in general give very low probabilities. We want a metric to reflect when the predictions are tuned in the right direction.

Brier score can be fooled by a dummy classifier

We decided to check what would happen if we shuffled all of the predicted probabilities in a random order, and then scored them with the Brier score and logistic loss. Intuitively, one would expect the scores to get worse, but that is not the case for the Brier score. This is strange, because the shuffled probabilities no longer have any relation to the weather for that line; the only common factor is the distribution of the actual predictions and the shuffled predictions. The logistic loss, however, roughly doubles, meaning that it gets much worse.

This shows that a dummy classifier that generates probabilities from a certain distribution would be equally good as a trained classifier in terms of the Brier score – at least on such an unbalanced data set. Logistic loss guards us against this. For this one it is not sufficient to produce probabilities from a given probability distribution and assign them randomly.

Evaluation score of shuffled hourly and daily probabilities scaled by the unshuffled score.

The share of failures in our data set is of order 10^{-5}, or 10 ppm. As a result, almost all of the shuffled probabilities line up with a non-failure, as before, and therefore contribute the same to the score as before the shuffle, for both evaluation metrics. The main difference lies in the shuffled probabilities that line up with the actual failures. Let us say that the model gives a probability of 0.1 % for an actual failure. That sample’s contribution to the Brier score is (1-0.001)^{2} = 0.998. Almost 97% of the predicted probabilities are 0% for the hourly predictions, so the failure will most likely be lined up with a 0% prediction after the shuffle. That sample’s contribution to the Brier score will then be (1-0)^{2} = 1, almost exactly the same. On the other hand, the sample’s contribution to the logistic loss will be -log(0.001) = 6.9 before the shuffle and -log(0) = 34.5 after the shuffle. This is because the logistic loss has a much steeper curve for low probabilities.

Using scikit-learn to create our own model

But enough about Statnett’s model; it is time to create our own. For this purpose we decided to use the scikit-learn library. Scikit-learn is very easy to use once you have your data in the correct format, and it has handy functions for splitting the data set into a training and a test set, tuning hyperparameters and cross-validation. As always, a lot of the job is preprocessing the data. Unfortunately, we only have access to 16 months of data from Met. To be able to compare our model with Statnett’s, we would also like to test it on the same overlap period as theirs, namely February to May 2017. Therefore we also needed to train the model on the weather data from Kjeller, which of course might not be ideal, keeping the weather differences between Met and Kjeller in mind. Kjeller data until the end of January 2017 was used for training.

For the training set we have weather data for all towers on a line from 1998 until February 2017, and we could have used the available data for a specific tower as a sample/row in our data matrix. This would, however, result in a gigantic data matrix with over 5 billion rows. Most of scikit-learn’s machine learning algorithms require that all the training data fit in memory, which would not be feasible with a matrix of that size. We decided to pull out the minimum, maximum and mean of some of the weather parameters for each line. In addition, we extracted the corresponding month, region and number of towers. This gave us a matrix of around 37 million rows, a size that fits in memory.
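
A sketch of that aggregation from tower level to line level with pandas; the column names are hypothetical:

    import pandas as pd

    def build_feature_matrix(weather: pd.DataFrame) -> pd.DataFrame:
        """Aggregate tower-level weather to one row per line and hour.

        Expects columns: line, tower, time, region, wind_speed, temperature.
        """
        features = (weather
                    .groupby(["line", "time"])
                    .agg(wind_min=("wind_speed", "min"),
                         wind_max=("wind_speed", "max"),
                         wind_mean=("wind_speed", "mean"),
                         temp_mean=("temperature", "mean"),
                         n_towers=("tower", "nunique"),
                         region=("region", "first"))
                    .reset_index())
        features["month"] = features["time"].dt.month
        return features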

Logistic regression beats Statnett’s current model

We managed to beat Statnett’s current model. We scored our logistic regression model and Statnett’s model on the Met data from the overlap period. Our score divided by the current model’s score is shown below. Our model is slightly better using the Brier score as the metric, and much better in terms of logistic loss. A reason why we get a lower logistic loss is that the current model predicts zero probability of failure at two actual failure times. As mentioned earlier, this is heavily penalized by the logistic loss.

Although we beat the current model using these two metrics, it is not certain that ours is superior. If we assume that failures are independent events and use the predicted probabilities in a Monte Carlo simulation, we find that both models overestimate the number of failures. There were 6 wind failures in the test set, but the probabilities generated by the current model suggest 7.4 failures, while ours indicate 8.1. In other words, our model overestimates the number of failures despite the fact that it is better in terms of logistic loss and Brier score.
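
That sanity check is easy to reproduce: with independent events the expected count is just the sum of the predicted probabilities, and a quick Bernoulli simulation gives a feel for the spread (sketch with a dummy prediction array):

    import numpy as np

    rng = np.random.default_rng(2)
    y_prob = rng.uniform(0, 2e-4, size=50_000)   # stand-in for a model's test-set predictions

    expected = y_prob.sum()                      # expected number of failures
    simulated = np.array([(rng.random(y_prob.size) < y_prob).sum() for _ in range(1000)])
    print(expected, np.percentile(simulated, [5, 50, 95]))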

Performance of logistic regression classifier relative to current model’s score. Less than 1 is better.

Logistic regression finds seasonal trend in wind failures

We can look at the coefficients from the logistic regression (the features are standardized) to see how the different features impact the probability of failure. A positive coefficient means that a large value of that feature increases the probability of failure. The coefficients of the logistic regression are visualized below. We can see that high values of wind speed and temperature increase the classifier’s predicted probability of failure. We also see that the winter months are given positive coefficients, while the summer months are given negative coefficients. Wind failures are more frequent during the winter, so the model manages to identify this pattern.

Importance of the variables in our logistic classifier. Positive importance means that a large variable value increases the predicted probability of failure.

All in all

Did we manage to answer the questions proposed at the very beginning of this blog post? Yes, at least to some extent.

1. How similar is the weather data from Kjeller and Met?

For the weather data comparison we see that there is a difference in the wind speeds reported by Kjeller and Met, where Kjeller tends to report stronger wind than Met. However this is not the case for all lines and there is no “quick fix”. It is of course not ideal to have a short overlap period to use for comparison since we are not even covering all seasons of the year.

2. How good is the model currently in use, developed by the Data Science team?

The current forecast model does not beat a baseline that predicts the long term failure rate – at least for the admittedly very short evaluation period we had data for. In addition, the model performs better on Kjeller data than Met data. This is no surprise considering that the model is trained on Kjeller data.

3. Can we beat today’s model?

Our logistic regression model performs marginally better using the Brier score and a lot better using the logistic loss. Our model does, however, overestimate the number of failures in the overlap period.

Reflections from the data science team

The work highlights several challenges with the current data and methodology that we will follow up in further work. The discussion about log-loss vs. Brier scores also highlights a more general difficulty in evaluating forecasting models of very rare events. Some thoughts we have had:

  • There might be erroneously labeled training and test data since there is really no way of knowing the real cause of an intermittent failure. The data is labeled by an analyst who relies on the weather forecast and personal experience. We have seen some instances where a failure has been labeled as a wind failure even though the wind speed is low, while at the same time the lightning index is very high. Such errors make it complicated to use (advanced) machine learning models, while the simpler model we currently employ can help in identifying likely misclassifications in the data.
  • How should we handle forecasts that are close misses in the time domain? E.g. the weather arrives some hours earlier or later than forecasted. Standard loss functions like log-loss or Brier score will penalize such a forecast heavily compared to a “flat” forecast, while a human interpreter might be quite satisfied with the forecast.
  • We should repeat the exercise here when we get longer overlapping data series and more failures to get better comparisons of the models. Six failures is really on the low side to draw clear conclusions.
  • A logistic regression seems like a likely candidate if we are to implement machine learning, but we would also like to explore Poisson regression. When the weather is really severe we often see more failures happening on the same line in a short period of time. This information is discarded in logistic regression, while Poisson regression is perfectly suited for this kind of data.

 

Simulating power markets with competitive self-play

In this post we present an unpublished article from 2009 that uses self-play to evolve bidding strategies in congested power markets with nodal pricing.

Reinforcement learning using self-play is currently one of the most fascinating areas of research within artificial intelligence. Papers like Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm make for inspiring reading, and I’m sure we’ll see a host of real-world applications in the future. These current trends in reinforcement learning came to mind as an old piece of unpublished research I wrote surfaced some weeks ago. In the article I used a genetic algorithm (GA) – all the rage back in 2009 – to learn the weights of a neural network that provides the strategy for competitive generator companies in a simulated power market. Using this setup I could demonstrate well-known Nash equilibria (NE) in common market situations, as well as simulate competition in intractable market setups.

I wrote the article while I worked as a scientific researcher at Sintef Energy AS back in 2009, and the work was financed by Sintef. The topic is not that relevant to pursue as a TSO, but the methods proved interesting enough to write a post about them here. Thanks to Sintef for letting me publish it here!

The entire article draft can be downloaded here

The market participants

The paper starts off with a description of the market participants (generators). The generators must bid their capacity into the power exchange. Each generator can behave strategically by adding a markup to its marginal cost, thereby attempting to achieve a higher profit, but at the risk of losing market share. The markup is the output of the neural network, which we will come to.


For convenience we use a linear marginal cost function with a scalar markup.

Nodal pricing and market clearing

The power exchange collects all the market participants’ bids (this includes demand). It then has to find the intersection of the aggregate supply and demand curves while taking transmission constraints into account. In other words, the power exchange must find the generation by all generators that minimizes the system cost. With linear bids and a linearized version of the power flow equations, this is a quadratic optimization problem.

The problem formulation implies that each node in the power system can have a unique power price (if there are constraints in the network). The nodal power prices are inferred from the Lagrange multipliers of the equality constraints; these multipliers are the costs of producing one additional MW of energy at each node. This setup roughly reflects several real-world power markets.

To solve this quadratic optimization problem we can employ what is called a DC optimal power flow. The Python package pypower handles all this for us. A limited amount of coding should let you add a variable markup to each generator’s marginal cost function.
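
A sketch of what that can look like with pypower’s bundled 9-bus test case; the markup values, option settings and indexing conventions should be checked against the pypower documentation, and the numbers are illustrative only:

    import numpy as np
    from pypower.api import case9, ppoption, rundcopf
    from pypower.idx_bus import LAM_P
    from pypower.idx_gen import PG

    ppc = case9()

    # Add each generator's markup to the linear term of its polynomial cost function.
    # gencost columns for a polynomial model: MODEL, STARTUP, SHUTDOWN, NCOST, c2, c1, c0
    markups = np.array([5.0, 0.0, 12.0])          # illustrative strategic markups
    ppc["gencost"][:, 5] += markups

    result = rundcopf(ppc, ppoption(VERBOSE=0, OUT_ALL=0))

    dispatch = result["gen"][:, PG]               # cleared generation per unit
    nodal_prices = result["bus"][:, LAM_P]        # Lagrange multipliers = nodal prices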

After the market is cleared, each participant gets paid the price at the node where it resides, even if the power is consumed at a node with a different price. Profit from trade between differently priced nodes, the congestion rent, devolves to the system operator.

Evolving strategies using a genetic algorithm

The genetic algorithm (GA) is used to evolve strategies for the market participants. Each market participant has its own pool of n strategies that are evolved using the GA. The aim of each participant is to maximize its profits.

At each generation g of the simulation the participants meet several times; these meetings are the iterations. At each iteration, each participant randomly selects a strategy from its pool of strategies. This strategy is evaluated to yield a bid B_u that is submitted to the market clearing algorithm. The market clearing returns the participants’ profits, which are stored in memory. After the desired number of iterations have passed, the fitness F_{u,s,g} of a strategy s for unit u at generation g is calculated as the sum of profits it received divided by the number of times it was played. Thereafter, selection, crossover and mutation are applied to produce the new generation of strategies, and a new generation is started.

In pseudocode:

  1. Initialize and load power system structure, g = 0
  2. g = g + 1, i = 0
  3. i = i + 1
  4. For each participant u, select a random strategy s from the pool
  5. Find the bid of each participant B_u, given the random strategy
  6. Evaluate the market clearing algorithm given the bids of all players and store the achieved profits
  7. If i < numIterations goto line (3), else:
  8. Calculate the fitness of each strategy F_{u,s,g} as the average of all profits received this generation
  9. Perform selection, crossover and mutation of all strategies
  10. If g < numGenerations goto line (2)
  11. Simulation ends

The number of iterations is selected such that all strategies are played on average ten times each generation. In this paper, the standard number of strategies for each participant was n = 50. In other words, there were 500 iterations per generation.

Representing the strategy as a neural network

A strategy is the rule by which the participants determine their bid, in this case the markup. In mathematical terms, a strategy is a function that takes a set of inputs and returns a decision (markup). This function has parameters that determine the nature of the relationship between inputs and output; it is these parameters that the GA works upon.

Inspired by research on the Iterated Prisoner’s Dilemma (IPD), the inputs to the strategy are taken to be the markup and the price attained when the strategy was last played. The loose resemblance to an IPD is that the markup is one’s own level of cooperation and the price corresponds to the cooperative level of the competitor(s). Using these two inputs, the strategy essentially has a memory of one round and is able to evolve tit-for-tat strategies and similar, i.e. if you bid high last turn, I continue to bid high this turn. The amount of information taken as input could easily be extended to allow for more sophisticated strategies; maybe this could be an area for recurrent neural networks.

A convenient way of creating a parametrized function that can represent a strategy as described above is a multilayer feed-forward neural network. It can approximate any function arbitrarily well, given sufficient neurons. In the article I used a small neural network with one hidden layer. The weights and biases in the neural network were then evolved using the genetic algorithm. A modern approach would maybe use deep Q-learning or something similar instead.
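
A minimal numpy sketch of such a strategy network; the flattened weight vector is what the GA evolves, and the layer size, input scaling and markup cap are illustrative (in practice the inputs would be normalized):

    import numpy as np

    N_HIDDEN, MAX_MARKUP = 5, 30.0

    def strategy_markup(weights, last_markup, last_price):
        """Map last round's (markup, price) to this round's markup via a 2-5-1 MLP."""
        w1 = weights[:2 * N_HIDDEN].reshape(N_HIDDEN, 2)
        b1 = weights[2 * N_HIDDEN:3 * N_HIDDEN]
        w2 = weights[3 * N_HIDDEN:4 * N_HIDDEN]
        b2 = weights[-1]

        x = np.array([last_markup, last_price])
        hidden = np.tanh(w1 @ x + b1)
        out = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))   # sigmoid keeps the output in (0, 1)
        return MAX_MARKUP * out

    # The GA chromosome is simply the flat weight vector:
    n_weights = 4 * N_HIDDEN + 1
    chromosome = np.random.uniform(-1, 1, size=n_weights)
    markup = strategy_markup(chromosome, last_markup=10.0, last_price=120.0)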

Perfect competition and co-operative equilibria

The model is primarily used to set up market situations that model classical game theory situations which have been analyzed extensively in the literature. The idea is to show that self-play converges to well-known Nash-equilibria (NE).

The first part of the article shows some rather obvious results:

  • In a market with a lot of small generators and a surplus of capacity, the markup converges to zero
  • In a duopoly where each generator can serve exactly half the load, the markup converges to the highest allowed markup
  • In a duopoly where one generator alone can serve the load (Bertrand competition), the markup converges to zero

Conceptually, Bertrand competition bears some resemblance to the Prisoner’s Dilemma: if both generators co-operate by bidding high they get a good profit; however, it is always in the interest of the other to defect and bid marginally lower, so mutual defection is ensured. In the IPD game it is known that players can evolve strategies that maintain a mutually cooperative level even though the equilibrium for both players is to defect if the game is played only once. In the normal IPD, there are only two choices, either full cooperation or full defection. A common extension of the game is to include several discrete levels of cooperation.

Market price in Bertrand competition for an initially random population (A), defecting population (B), and co-operative population (C). In all simulations, the marginal cost is 100, and the markup is eventually reduced to zero.

In a paper on the IPD from 2002, the authors noted that the chance of mutual cooperation dwindled as the possible choices in the IPD increased. Considering that each player in the Bertrand game can select any real positive number as her price (which can be thought of as the levels of cooperation), it is simply very unlikely to remain at a mutually cooperative level. This effect can explain why mutual cooperation is likely if the choice of each player is limited to only a few discrete price levels, but highly unlikely if the price can be selected from a continuum.

Other papers on agent-based modelling limit the agents to select from a discrete set of choices. If the market situation resembles Bertrand competition, the above discussion might lead one to think that mutual co-operation can occur, albeit with a small probability if the number of choices is large. However, it turns out that the similarity between the Bertrand game and the Prisoner’s Dilemma disappears if the number of choices is limited. In fact, the outcome of simulated Bertrand competition with discrete levels of cooperation is quite arbitrary and depends on the exact markups the players can choose from.

Competition with capacity constraints

With capacity constraints, one generator alone cannot serve the entire market as in Bertrand competition. The participant that bids higher than its opponent still captures some residual demand. A market player can thus follow two strategies:

  1. Try to undercut the other player to get a large market share and risk low prices, or
  2. set the markup high to make sure the market clearing price remains high and accept that the other player gets the larger market share.

In this case there exists a price at which a market player is indifferent as to which direction it should deviate, as long as the opponent sticks to its price. The price of indifference occurs when undercutting the opponent gives the exact same profit as bidding higher than the opponent and capturing the residual demand.

The price of indifference depends on the exact market setup. In the simulation below, the price of indifference was 160. The bid of player 1 stays high (to maximize profits) while the bid of player 2 stays below the threshold of 160. Due to random mutation in the genetic algorithm, the values don’t stabilize entirely. Who ends up as the high bidder, and thus worse off, is entirely random.

The two players undercut one another until, at the threshold price, player 1 starts increasing its markup.

Duopoly in a congested two-bus network

So far, the underlying electrical network has been kept uncongested to enable analysis of some well known economic games. In this section, the simplest possible constrained network will be analysed theoretically and simulated with the developed model. The network is depicted in the figure below. It consists of two buses with a generator and load each; the buses are connected by a transmission line capable of transferring K MW in either direction.

Two-bus network with constrained transmission capacity.

Moreover, the transmission capacity K is less than the demand at either of the buses, and each generator is capable of serving the load at its own bus while at the same time utilizing the entire transmission capacity.

Accordingly, the generators face a strategic choice:

  1. Bid high to raise the price at its bus and serve the load that the opponent cannot capture because of the limited transmission capacity; or,
  2. Bid low and serve all the load it possibly can cover.

At first, this might seem like the Bertrand game with capacity constraints. However, there is one important difference: since the generators are paid the nodal price (and not a common system price), the low bidder can profitably raise its price towards that of the high bidder. The consequence is that there is no NE in pure strategies, resulting in a cyclical variation in the price over the course of a simulation.

Simulated markups in the two-bus network for 4 different transmission capacity levels K.

Competition in a meshed and congested grid

This section presents results from an analysis of a small benchmark power system presented in a paper on agent-based modelling. There are 3 strategically behaving generators in the system, and all load is assumed to be completely inelastic. All transmission lines are assumed to have sufficient transmission capacity, with the exception of the line from bus 2 to bus 5, which is limited to 100 MW.

Power system diagram. The numbers close to the lines are reactances in per unit.
Generator data

The authors of the original paper analytically computed the NE and simulated the system using adaptive agents with a discrete set of markups. Two symmetrical NE were found: generator 3 should always bid its highest markup, while either generator 1 or 2 bids the highest markup and the other bids its marginal cost. The agent-based approach did discover the NE, but alternated between the two equilibria in a cyclical fashion. Here, the markups can be any real number on the interval [0, 30]; otherwise the simulation setup is identical to that of the original work.

I find the same NE; however, no cyclical behavior occurs. As soon as the agents have locked in on one NE, the game stays in that equilibrium for the rest of the simulation. To discover several equilibria, the model must be rerun with random initial conditions.

Simulated markups for the three generators and corresponding consumer benefit, producer benefit and congestion rent on the right.

The figure above shows simulated markups on the left and the corresponding consumer benefit (about 2500), producer benefit (15000) and congestion rent on the right. If all producers had bid according to their marginal cost, we would have achieved the socially optimal solution, which has a producer benefit of 1056 and a consumer benefit of 17322. Accordingly, there has been a large transfer of benefit from consumers to producers due to market power.

If we double the capacity on the line between bus 2 and 5, market power is mostly eradicated. In addition, the total economic benefit to society increases slightly.

Ideas for further work

Even with the simple setup used in this work it is possible to evolve interesting strategies. However, the field of reinforcement learning has evolved a lot since 2009. One idea for further development would be to re-implement the ideas presented here using modern frameworks for reinforcement learning like OpenAI Gym and Tensorforce. Such a setup could possibly be used to evolve cooperative strategies with longer memory, as well as to simulate time-varying market situations (like in the real world).

 

Comparing javascript libraries for plotting

After some research on available javascript libraries for plotting we decided that plotly fits our needs the best. We also provide a small vue-wrapper for plotly.

Plotly.js renders fast and has a large selection of visualizations

We frequently plot quite large amounts of data and want to be able to interact with the data without long wait times. I use the term large data, and not big data, since we are talking about a typical dataset size on the order of some 100k datapoints. This is not really big data, but it is still large enough that many plotting packages have trouble with rendering, zooming, panning etc. Ideally we would like render times below about 500 ms.

To find the ideal plotting package we set up a small test with some promising plotting packages: Echarts, Highcharts, Plotly and Bokeh. We plotted around 300k datapoints from a time series and noted the render times. The chart below displays the render times in milliseconds from five repetitions. Echarts and Bokeh are canvas-based, and render much slower than the other packages, which are SVG-based.

Additionally, we find ourselves wanting a large selection of visualization types. Classic bar charts, line charts etc. are just not enough. Incidentally, Plotly also offers a wide range of charts. Plotly is also available as a Python and R package, so we can build experience in one plotting package independently of programming language.

Vue-wrapper for plotly

One of our main front-ends uses the vue.js framework. To integrate plotly cleanly we created a vue wrapper for plotly. Check it out on GitHub:

A vue wrapper for plotly.js chart library
https://github.com/statnett/vue-plotly

 

 

Setting up a forecast service for weather dependent failures on power lines in one week and ten minutes

Combining Tableau, Python, Splunk and open data from met.no to deliver a realtime forecast of the probability of failure due to wind and lightning on overhead lines.

Our last post was about estimating the probability of failure for overhead lines based on past weather and failure statistics. The results are currently used as part of our long term planning process, by using a Monte Carlo tool to simulate failures in the Norwegian transmission system.

In this post you can read about how we re-used this model for a different use case: using the current weather forecast to predict probability of failure per overhead line. This information is very relevant for our system operators when preparing for severe weather events. We use data from the Norwegian Meteorological Institute, Tableau for visualization, Python for our backend and Splunk for monitoring. This post explains how.

Overview of the forecast service

The core delivery of the mathematical model described in our previous blog post is a set of fragility curves per overhead line. To recall from the previous post, those fragility curves are constructed from historical failures on each overhead line combined with historical weather data for that line. The resulting fragility curves give us the relation between a weather event and the expected probability of failure. In our current model we have two fragility curves per line: one that describes the expected failures due to wind vs wind exposure, and one that models the expected failures due to lightning vs lightning exposure.

Now that we have this information per overhead line, we can combine it with weather forecast data to forecast expected failure rates for all overhead lines. Our service downloads the latest weather forecast when it becomes available, updates the expected failure probabilities, and exports the results to file. We designed a Tableau dashboard to visualize the results in an interactive and intuitive manner.

The resulting setup is displayed below:

Flow diagram of the forecast service.

In the remainder, we cover how we created and deployed this service.

Data input: fragility curves

See our previous blog post for an explanation of the creation of the fragility curves.

Data input: open weather data

The Norwegian Meteorological Institute (met.no) maintains a THREDDS server where anyone can download forecast data for Norway. THREDDS is an open source project which aims to simplify the discovery and use of scientific data. Datasets can be served through OPeNDAP, OGC’s WMS and WCS, HTTP, and other remote data access protocols. In this case we will use OPeNDAP so we don’t have to download the huge forecast files, only the variables we are interested in.

The first thing to notice is that we select the latest deterministic forecast, meps_det_extracted_2_5km_latest.nc, and load this using xarray. An alternative here would have been to download the complete ensemble forecast, to get a sense of the uncertainty in the predictions as well. Next, we select wind_speed as the variable to load. Then we use scipy.spatial.cKDTree to find the nearest grid point to each of the ~30 000 power line towers and load the corresponding wind speed.
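
A sketch of that step; the OPeNDAP URL, the coordinate/dimension names and the tower coordinate file are indicative only, so check the THREDDS catalogue at thredds.met.no for the current layout:

    import numpy as np
    import xarray as xr
    from scipy.spatial import cKDTree

    # Open the latest deterministic MEPS forecast lazily over OPeNDAP (indicative path)
    url = ("https://thredds.met.no/thredds/dodsC/"
           "meps_det_extracted_2_5km_latest.nc")
    ds = xr.open_dataset(url)

    wind = ds["wind_speed"]                      # only this variable is actually fetched

    # Build a KD-tree over the forecast grid and find the nearest grid point per tower
    grid = np.column_stack([ds["latitude"].values.ravel(),
                            ds["longitude"].values.ravel()])
    tree = cKDTree(grid)

    towers = np.loadtxt("tower_coordinates.csv", delimiter=",")   # hypothetical lat/lon file
    _, nearest = tree.query(towers)

    # Wind speed time series at the nearest grid point for each tower
    y_idx, x_idx = np.unravel_index(nearest, ds["latitude"].shape)
    wind_at_towers = wind.isel(y=xr.DataArray(y_idx), x=xr.DataArray(x_idx))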

Forecast service: service creation and monitoring

We have developed both the probability model and the forecast service in Python. The forecasts are updated four times a day. To download the latest forecast as soon as it becomes available, we continuously run a Windows process using Python’s win32service package. Only a few lines of code were necessary to do so:
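
The original code listing is not reproduced here, but a skeleton of such a service looks roughly like this; the service name, polling interval and the run_forecast_update entry point are hypothetical, while the actual logic lives in the download_service64Bit module mentioned below:

    import win32event
    import win32service
    import win32serviceutil
    import servicemanager

    import download_service64Bit   # module with the actual forecast logic


    class ForecastService(win32serviceutil.ServiceFramework):
        _svc_name_ = "LineFailureForecast"
        _svc_display_name_ = "Overhead line failure forecast service"

        def __init__(self, args):
            win32serviceutil.ServiceFramework.__init__(self, args)
            self.stop_event = win32event.CreateEvent(None, 0, 0, None)

        def SvcStop(self):
            self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
            win32event.SetEvent(self.stop_event)

        def SvcDoRun(self):
            servicemanager.LogInfoMsg("Forecast service started")
            # Wake up every 15 minutes and check for a new forecast until stopped
            while win32event.WaitForSingleObject(self.stop_event, 15 * 60 * 1000) \
                    == win32event.WAIT_TIMEOUT:
                download_service64Bit.run_forecast_update()   # hypothetical entry point


    if __name__ == "__main__":
        win32serviceutil.HandleCommandLine(ForecastService)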

The actual logic of obtaining the fragility curves, downloading new forecasts, and combining this into the desired time series is included in the download_service64Bit module.

To ensure the service behaves as desired, we monitor the process in two ways. Firstly, we use Splunk to monitor the server's Windows event log and pick up any unexpected crashes of the service; the team receives an email within 15 minutes of an unexpected event. Secondly, we plan to develop more advanced analytics of our logs using Splunk. This is made very easy by Python's excellent logging module (a great introduction on why and how you should use this module can be found here).
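As an illustration, a minimal logging setup along these lines gives Splunk structured log lines to index; the file name and log format below are our own choices:

```python
import logging

# File name and format are illustrative; Splunk indexes the log file and can
# alert on WARNING/ERROR entries in addition to the Windows event log.
logging.basicConfig(
    filename="forecast_service.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

logger = logging.getLogger("forecast_service")
logger.info("Downloaded new forecast")
# Inside except blocks, logger.exception(...) records the full traceback,
# which makes the Splunk searches far more useful.
```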

Publishing results to Tableau

We use Tableau as a quick and easy way to create interactive and visually attractive dashboards. An example screenshot can be seen below:

Example screenshot of the full dashboard

In the top left corner, information about the overall system state is shown. The percentage is the total probability of at least one line failing in the coming three-day forecast period. The dashboard shows the situation just before a major storm in January this year, which led to a very high failure probability. Seven failures due to severe wind actually occurred two days after this forecast. The bar plot shows how much of the estimated failure probability is due to wind versus lightning.
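How that percentage is aggregated is not spelled out here, but one simple way to compute such a number from the per-line, per-hour probabilities, assuming independence between lines and hours, is:

```python
import numpy as np

def prob_any_failure(p):
    """Probability of at least one failure in the forecast period.

    p is an array of shape (n_lines, n_hours) with per-line, per-hour failure
    probabilities. Independence between lines and hours is assumed here,
    which is a simplification.
    """
    return 1.0 - np.prod(1.0 - p)

# Two hypothetical lines over two hours
p = np.array([[0.001, 0.002],
              [0.0005, 0.010]])
print(f"{prob_any_failure(p):.1%}")  # about 1.3%
```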

Overview of the main system state

The main graph shows how the failure probability develops over time. When no specific line is selected, it shows the total failure probability for the whole system over the forecast period:

Time chart of the system-wide failure probability

When one line is selected, it shows the details for this line: wind speed or lightning intensity, and estimated failure probability over time.

Detail view for a selected line

The heat map at the bottom of the screen shows the probability of failure per overhead line per hour.

Heat map of failure probability per overhead line per hour

The list of lines is ordered with the lines most likely to fail on top. This gives the system operator a good indication of which lines to focus on. The heat map shows how the failure probability develops over time, and that not all lines are maximally exposed at the same time. The wind/lightning emoji before the name of a line indicates whether that line is exposed to high wind or to lightning.

Going from idea to deployment in one week and ten minutes

This week's goal was to go from idea to deployment in one week by setting aside all other tasks. However, as Friday evening came closer, more and more small bugs kept popping up, both in the data visualization and in the back-end.

Friday afternoon debugging

In the end, we got the service at least 80% complete. To get the last things in place we might need just 10 minutes more…

Estimating the probability of failure for overhead lines

In Norway, about 90 percent of all temporary failures on overhead lines are due to weather. In this post, we present a method to model the probability of failures on overhead lines due to lightning.

Welcome to the blog for Data Science in Statnett, the Norwegian electricity transmission system operator. We use data science to extract knowledge from the vast amounts of data gathered about the power system and suggest new data-driven approaches to improve power system operation, planning and maintenance. In this blog, we write about our work. Today’s topic is a model for estimating the probability of failure of overhead lines.

Knowing the probability of failure is central to reliability management

For an electricity transmission system operator like Statnett, balancing power system reliability against investment and operational costs is at the very heart of our operation. A more data-driven approach can improve on the traditional methods for power system reliability management. In the words of the recently completed research project Garpur:

Historically in Europe, network reliability management has been relying on the so-called “N-1” criterion: in case of fault of one relevant element (e.g. one transmission system element, one significant generation element or one significant distribution network element), the elements remaining in operation must be capable of accommodating the new operational situation without violating the network’s operational security limits.

Today, the increasing uncertainty of generation due to intermittent energy sources, combined with the opportunities provided e.g. by demand-side management and energy storage, call for imagining new reliability criteria with a better balance between reliability and costs.

In such a framework, knowledge about failure probabilities becomes central to power system reliability management, and thus to the whole planning and operation of the power system. When predicting the probability of failure, weather conditions play an important part: in Norway, about 90 percent of all temporary failures on overhead lines are due to weather, the three main weather parameters influencing the failure rate being wind, lightning and icing. In this post, we present a method to model the probability of failures on overhead lines due to lightning. The full procedure is documented in a paper for PMAPS 2018. In an upcoming post we will demonstrate how this knowledge can be used to predict failures using weather forecast data from met.no.

Data sources: failure statistics and weather data

Statnett gathers failure data and publishes it annually in our failure statistics. The failures are classified according to their cause. For this work, we considered 102 different high-voltage overhead lines. For these lines, there were 329 failures due to lightning in the period 1998–2014.

We have used reanalysis weather data computed by Kjeller Vindteknikk. These reanalysis data cover the period from January 1979 until March 2017 and consist of hourly historical time series for lightning indices on a 4 km by 4 km grid. The important property with respect to the proposed method is that the finely meshed reanalysis data allows us to use the geographical position of the power line towers and line segments to extract lightning data from the reanalysis data set. Thus it is possible to evaluate the historical lightning exposure of each transmission line.

Lightning indices

The first step is to look at the data. Lightning is a sudden discharge in the atmosphere caused by electrostatic imbalances. These discharges occur between clouds, within clouds, or between clouds and the ground. There is no atmospheric variable directly associated with lightning. Instead, meteorologists have developed regression indices that measure the probability of lightning. Two of these indices are linked to the probability of failure of an overhead line: the K index and the Total Totals index. Both can be calculated from the reanalysis data.
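Both indices are standard and can be computed from temperature and dew point (in °C) at the 850, 700 and 500 hPa pressure levels; as a rough sketch:

```python
def k_index(t850, t700, t500, td850, td700):
    """K index from temperature (t) and dew point (td) in degrees Celsius
    at the 850, 700 and 500 hPa pressure levels."""
    return (t850 - t500) + td850 - (t700 - td700)


def total_totals(t850, t500, td850):
    """Total Totals index: Vertical Totals (t850 - t500) plus
    Cross Totals (td850 - t500)."""
    return (t850 - t500) + (td850 - t500)
```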

Figure 1: Rank and K index for lightning failures

Figure 1 shows how lightning failures are associated with high and rare values of the K and Total Totals indices, computed from the reanalysis data set. For each time of failure, the highest value of the K and Total Totals indices over the geographical span of the transmission line has been calculated, and these values are then ranked among all historical values of the indices for that line. This illustrates how different lines fail at different levels of the index values, but maybe even more importantly: the link between high index values and lightning failures is very strong. Considering all the lines, 87 percent of the failures classified as "lightning" occur within the 10 percent of hours with the highest index values. This is promising…

Figure 2: TT index versus K index shows a seasonality trend

In Norway, lightning typically occurs in the summer, in the afternoon, as cumulonimbus clouds build up during the day. But there is a significant number of failures due to thunderstorms during the rest of the year as well, winter months included. To see how the K and Total Totals indices behave in different seasons, the values of these two indices at the time of each failure are plotted in Figure 2. From the figure it is obvious, though the data is sparse, that there is relevant information in the Total Totals index that has to be incorporated into the probability model of lightning-dependent failures. The K index has a strong connection with lightning failures in the summer months, whereas the Total Totals index seems to be more important during the winter months.

Method in brief

The method is a two-step procedure: First, a long-term failure rate is calculated for each line based on Bayesian inference, taking into account the observed failures. This step ensures that lines that have experienced relatively more failures, and thus are more failure-prone, get a relatively higher failure rate. Second, the long-term annual failure rates calculated in the first step are distributed into hourly probabilities. This is done by modelling the probabilities as a function of relevant meteorological parameters while ensuring that the probabilities are consistent with the failure rates from step 1.

Bayesian update

From the failure statistics we can calculate a prior failure rate \lambda due to lightning simply by summing the number of failures per year and dividing by the total length of the overhead lines. We then arrive at a failure rate per 100 km per year. This is our prior estimate of the failure rate for all lines.

When we observe a particular line, failures arrive in what is termed a Poisson process. Assuming an exponentially distributed prior for the failure rate, we arrive at a convenient expression for the expected posterior failure rate \lambda^B:

\lambda^B = \frac{1 + \sum{y_i}}{\frac{1}{\lambda} + n}

where n is the number of years with observations, \lambda is the prior failure rate and y_i is the number of observed failures in year i.
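In code, the update is a one-liner (the numbers in the example below are made up):

```python
def posterior_failure_rate(prior_rate, yearly_failures):
    """Expected posterior failure rate for one line, given an exponential
    prior with mean prior_rate and observed failure counts per year."""
    n = len(yearly_failures)
    return (1 + sum(yearly_failures)) / (1 / prior_rate + n)


# Hypothetical line: prior of 0.5 failures per year, 17 years of observations
print(posterior_failure_rate(0.5, [0, 1, 0, 0, 2] + [0] * 12))  # ~0.21
```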

Figure 3: The prior and the posterior distribution. Dashed vertical lines are the expectations.

Distributing the long term failure rates over time

We now have the long-term failure rate for lightning, but we still have to establish a connection between the K index, the Total Totals index and the failure probability. The goal is to end up with hourly failure probabilities we can use in Monte Carlo simulations of power system reliability.

The dataset is heavily imbalanced: there are very few failures (positives), and the method has to account for this so we don't end up predicting a 0% probability all the time. A good explanation of learning from imbalanced datasets can be found in this kdnuggets blog.

Many approaches could be envisioned for this step, including several variants of machine learning. For now, however, we have settled on an approach using fragility curves, which is also robust for this type of skewed dataset.

A transmission line can be considered a series system of many line segments between towers. We assume that the segment with the worst weather exposure is representative of the transmission line as a whole.

We then define the lightning exposure at time t:

w^t = \alpha_K \max(0, K^t_{max} - K_{\text{thres}})^2 + \alpha_{TT} \max(0, TT^t_{max} - TT_{\text{thres}})^2

where \alpha_K, \alpha_{TT} are scale parameters, K_{max}^t is the maximum K index along the line at time t, TT_{max}^t is the maximum Total Totals index along the line at time t, and K_{\text{thres}}, TT_{\text{thres}} are threshold values for the lightning indices below which the indices have no impact on the probability.
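In code, the exposure is a direct transcription of the formula; the threshold and scale values below are the ones reported in the results section further down:

```python
import numpy as np

# Threshold and scale parameters as reported in the results section below
K_THRES, TT_THRES = 20.0, 45.0
ALPHA_K, ALPHA_TT = 0.88, 0.12


def lightning_exposure(k_max, tt_max):
    """Lightning exposure w^t, given the maximum K index and Total Totals
    index along the line at time t (scalars or numpy arrays)."""
    return (ALPHA_K * np.maximum(0.0, k_max - K_THRES) ** 2
            + ALPHA_TT * np.maximum(0.0, tt_max - TT_THRES) ** 2)
```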

Each line then has a probability of failure at time t given by:

p_L^t = F(w^t; \sigma_L, \mu_L)

where F(\cdot) is the cumulative distribution function of the lognormal distribution.

To find the expected value and standard deviation that describe the lognormal function, we minimize the following objective, which ensures that the expected number of failures matches the posterior failure rate:

\mu_L, \sigma_L = \underset{\mu, \sigma}{\text{argmin}} \: g(p^t_L; \mu, \sigma)

where

g(p^t_L; \mu, \sigma) = \left(\lambda^B - \frac{1}{k}\sum_{t=0}^T p^t_L\right)^2

Here T is the number of hours in the weather data set and k is the corresponding number of years, so that the sum of hourly failure probabilities is compared to the expected number of failures per year.
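A minimal sketch of this fitting step using scipy could look as follows; the optimizer choice and initial guess are illustrative, not necessarily what the tool uses:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm


def fit_fragility_curve(w, lambda_post, n_years):
    """Fit (mu, sigma) of the lognormal fragility curve so that the implied
    mean number of failures per year matches the posterior rate lambda_post.

    w        : array of hourly lightning exposures w^t for one line
    n_years  : number of years covered by the exposure series (k in the text)
    """
    def objective(params):
        mu, log_sigma = params  # optimise log(sigma) to keep sigma positive
        p = lognorm.cdf(w, s=np.exp(log_sigma), scale=np.exp(mu))  # p_L^t = F(w^t)
        return (lambda_post - p.sum() / n_years) ** 2

    x0 = [np.log(max(w.max(), 1e-6)), 0.0]  # heuristic starting point
    res = minimize(objective, x0, method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)
```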

If you want to delve deeper into the maths behind the method, see the paper we will present at PMAPS 2018.

Fitting the model to data

In this section, simulation results are presented where the model has been applied to the Norwegian high-voltage grid. In particular, 99 transmission lines in Norway have been considered: 13 lines at 132 kV, 2 lines at 220 kV, 60 lines at 300 kV and 24 lines at 420 kV. Except for the 132 kV and 220 kV lines, which are situated in Finnmark, the lines are distributed evenly across Norway.

The threshold parameters K_{\text{thres}} and TT_{\text{thres}} have been set empirically to K_{\text{thres}} = 20.0 and TT_{\text{thres}} = 45.0. The two scale parameters \alpha_K and \alpha_{TT} have been set heuristically to \alpha_K = 0.88 and \alpha_{TT} = 0.12, to reflect the different weights of the seasonal components.

Results

The probability models presented above are used by Statnett as part of a Monte Carlo tool that simulates failures in the Norwegian transmission system for long-term planning studies. Together with a similar approach for wind-dependent probabilities, this framework provides the basic input to the Monte Carlo simulation models. In this respect, the most important property of the simulations is a coherent weather data set, such that failures caused by bad weather appear logically and consistently in space and time.
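For a single line, the failure-drawing part of such a simulation can be sketched as follows (this sketch covers only the drawing of failure occurrences):

```python
import numpy as np

rng = np.random.default_rng(seed=1)


def sample_failures(p_hourly, n_samples=500):
    """Draw Monte Carlo samples of hourly failure occurrences for one line,
    given its hourly failure probabilities p_L^t. Returns a boolean array of
    shape (n_samples, n_hours)."""
    p_hourly = np.asarray(p_hourly)
    return rng.random((n_samples, p_hourly.size)) < p_hourly
```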

Figure 4: Seasonal differences in K index and TT index for simulated results

Figure 4 shows how the probability model captures the different values of the K index and the Total Totals index as the time of the simulated failures varies over the year. This figure should be compared with Figure 2. The data in Figure 4 is one of 500 samples from a Monte Carlo simulation covering the period from 1998 to 2014.

The next figures show a zoomed-in view of some of the actual failures, each figure showing how actual failures occur at times of elevated historical failure probability.

Figures 5–8: Actual failures occurring at times of elevated historical failure probability