Is bid filtering effective against network congestion?

Earlier this year, I wrote an introduction to the bid filtering problem and explained how my team at Statnett is trying to solve it. The system we’ve built combines data from various sources in its attempt to make the right call. But how well is it doing its job? Or, more precisely, what is the effect on network congestion of applying our bid filtering system in its current form?

Kyoto. Photo: Belle Co

Without calling it a definitive answer, a paper I wrote for the CIGRE Symposium contains research results that provide new insight. The symposium was in Kyoto, but a diverse list of reasons (including a strict midwife) forced me to leave the cherry blossom to my imagination and test my charming Japanese phrases from a meeting room in Trondheim.

A quick recap

European countries are moving toward a new, more integrated way of balancing their power systems. In a country with highly distributed electricity generation, we want to automatically identify power reserves that should not be used in a given situation due to their location in the grid. If you would like to learn the details about the approach, you are likely to enjoy reading the paper. Here is the micro-version:

To identify bids in problematic locations, we build a detailed network model, predict the future situation in the power grid, and then apply a nodal market model that gives us the optimal plan of balancing activations for that specific situation. But since we don’t really know how much is going to flow into or out of the country, we optimize many times with different assumptions on cross-border flows. Each of the exchange scenarios tells its own story about which bids should, and shouldn’t, be activated. The scenarios don’t always agree, but in the aggregate they let us form a consensus result, determining which bids will be made unavailable for selection in the balancing market.

An unfair competition

Today, human operators at Statnett select power reserves for activation when necessary to balance the system, always mindful of their locations in the grid and potential bottlenecks. Their decisions on which balancing bids to activate – and not activate – often build on years of operational experience and an abundance of real-time data.

Before discussing whether our machine can beat the human operators, it’s important to keep in mind that the bid filtering system will operate in a different context: the new balancing market, where everyday balancing will take place without the involvement of human operators. This will change the rules of the balancing game completely. While human operators constantly make a flow of integrated last-minute decisions, the new automatic processes are distinct in their separation of concerns and must often act much earlier to respect strict timelines.

Setting up simulations

The quantitative results in our paper come from simulating one day in the Norwegian power grid, using our detailed, custom-built Python model together with recorded data. The balancing actions, and the way they are selected, are different between the simulations.

The first simulation is Historical operation. Here, we simply replay the historical balancing decisions of the human operators.

The second simulation is Bid filtering. Here, we replace the historical human decisions with balancing actions selected by a zonal market mechanism that doesn’t see the internal network constraints or respect the laws of physics. The balancing decisions will often be different from the human ones in order to save some money. But before the market selects any bids, some of them are removed from the list by our bid filtering machine in order to prevent network congestion. We try not to cheat: the bid filtering takes place using only data and forecasts available 30 minutes before the balancing actions take effect.

The third simulation is No filtering. Here we try to establish the impact on congestion of moving from today’s manual, but flexible operation to zonal, market-based balancing. This simulation is a parallel run of the market-based selection, but without pre-filtering any bids, and it provides a second, possibly more relevant benchmark.

Example from 09:30 on August 25, 2021. Red cells are balancing bids made unavailable in the bid filtering simulation. As a result, the market-based balancing will not select exactly the same bids in the Bid filtering scenario (black dots) and the No filtering scenario (white dots).

Power flow analyses

The interesting part of the simulation is when we inject the balancing decisions into the historical system state and calculate all power flows in the network. Comparing these flows to the operational limits reveals which balancing approaches are doing a better job at avoiding overloads in the network.

Example from 09:30 on August 25, 2021 showing reliability limits. Reliability limits in Norway restrict the flow on a combination of transmission lines, so-called Power Transfer Corridors (PTCs). These 13 PTC constraints are violated in one or more of the simulations.

The overloads are similar between the simulations, but they are not the same. To better understand the big picture, we created a congestion index that summarizes the resulting overload situation in a single value. The number doesn’t have any physical interpretation, but gives a relative indication of how severe the overload situation is.
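To give a feel for what such an index can look like, here is a minimal sketch. The weighting below is an illustration only, not the exact definition used in the paper.

```python
# Illustrative congestion index: sum of relative overloads across all
# monitored constraints (PTCs). Zero means no limit is violated.
# NOTE: the index used in the paper may weight constraints differently.

def congestion_index(flows_mw, limits_mw):
    """flows_mw and limits_mw are dicts keyed by constraint (PTC) name."""
    index = 0.0
    for ptc, flow in flows_mw.items():
        limit = limits_mw[ptc]
        overload = max(0.0, abs(flow) - limit)  # MW above the limit, if any
        index += overload / limit               # relative overload, unitless
    return index

# Example: two corridors, one of them overloaded by 10 %
print(congestion_index({"PTC1": 1100.0, "PTC2": 800.0},
                       {"PTC1": 1000.0, "PTC2": 900.0}))  # -> 0.1
```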

Congestion index for reliability limits in the Norwegian system from August 25, 2021

When we run the simulation for 24 historical hours, we see that with market-based balancing, there would be overloads throughout the day. When we apply bid filtering and remove the bids expected to be problematic, overloads are reduced in 9 of the 24 hours, and we’re able to avoid the most serious problems in the afternoon.

No matter the balancing mechanism, the congestion index virtually never touches zero. Even the human operators with all their extra information and experience run into many of the same congestion problems. This shows that balancing activations play a role in the amount of congestion, but they are just one part of the story, along with several other factors.

With that in mind, if you’re going to let a zonal market mechanism decide your balancing decisions, it seems that bid filtering can have a clear, positive effect in reducing network overloads.

What do you think? Do you read the results differently? Don’t be afraid to get in touch, my team and I are always happy to discuss.

ありがとうございました (thank you very much)

Smarter Transmission Grid Capacities with Weather Data

We are looking into sensor- and weather-based approaches to operating the grid more efficiently using dynamic line rating

This winter, the electricity price in the south of Norway has reached levels over ten times higher than the price in the north. These differences are a result of capacity constraints, and not surprisingly, Statnett is considering increasing the grid capacity. An obvious way would be to build new transmission lines, but this is very expensive and takes years to complete. The other option is to utilize the existing grid more efficiently.

What factors are limiting the transmission capacity? How do we estimate these limits, and can more data help us get closer to their true values? Together with some of my colleagues on the Data science team, I’ve been putting weather data to work in an attempt to calculate transmission capacities in a smarter way. Because if we can safely increase the capacities of individual transmission lines, we might be able to increase the overall capacity between regions as well.

Too hot to handle

Transmission capacity is limited for a number of reasons. If too much power flows through a single transmission line connecting a power producer and a consumer, there are at least three things that could go really wrong:

Drawing of a wind turbine connected to a house through a transmission line. There are three transmission towers between the turbine and the house.
A wind power plant connected to a consumer through a transmission line. The line consists of three towers and four spans.
  1. The voltage at the consumer could drop below acceptable levels
  2. The conductor could suffer material damage if the temperature goes beyond its thermal limit
  3. The conductor could sag too close to the ground as it expands with higher temperature

The sagging problem must be taken seriously to avoid safety hazards and wildfire risk, and regulations specify a minimum allowed clearance from the conductor to the ground.

Statnett calculates a thermal rating for each line in the transmission grid. These ratings restrict the power flow in the network in an attempt to always respect thermal limits and minimum clearance regulations. Today, these calculations rely on some significant simplifications, and the resulting limits are generally considered to be on the conservative side.

Fresh weather data enables a different kind of calculation of these limits, and the ambition of our current work is to increase the current-carrying capacity, or ampacity, of Norwegian transmission lines when it’s safe to do so. But before attempting to calculate better limits indirectly, using weather data, we first ran a project to estimate ampacity directly, using sensors.

Estimating ampacity with sensors

A dynamic line rating (DLR) sensor is a piece of hardware that provides ampacity values based on measurements. The sensor cannot measure the ampacity directly; it can only estimate it based on what it observes, which is typically the sag or the clearance to ground.

Image taken from the pylon shows a DLR sensor attached to the conductor. There is also a weather station in the pylon. There are large trees and houses in the background.
One of the spans where we have tested DLR sensors.

We recently tested and evaluated DLR sensors from three different suppliers. The results were mixed: we experienced large ampacity deviations between the three.

DLR sensors come at a cost, and the reach of a sensor is limited to the single span where it is located. This is the motivation behind our indirect approach: calculating ampacity ratings without using DLR sensors.

Calculating the ampacity

Image shows a span affected by current, air temperature, wind and the sun. Sag and clearance are illustrated: The sag is the maximum distance a conductor has sagged below the suspension point. The clearance is the distance between the line's lowest point and the ground. Regulations specify the minimum allowed clearance.
A typical span. Regulations specify the minimum allowed clearance between the ground and the conductor.

In our indirect model, we base our ampacity calculations on the method described in CIGRE 601 Guide for thermal rating calculations of overhead lines. In short, we need to solve a steady-state heat balance equation that accounts for joule losses, solar heating, convective cooling and radiative cooling:
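In compact form (the notation here is chosen for brevity; CIGRE 601 spells out each term in full detail):

$$ P_J(I) + P_S = P_C + P_R $$

The ampacity is the largest current $I$ for which this balance can hold without the conductor temperature exceeding its thermal limit. Each term is described below.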

Joule losses heat the conductor and increase as the current through the conductor increases. The losses are caused by the resistive and magnetic properties of the conductor.

Solar radiation heats the conductor and depends on the solar position, which varies with the time of day and the day of year. Cloud cover, the orientation of the line, the type of terrain on the ground and the properties of the conductor also influence the heating effect.

Convection cools the conductor. The effect is caused by the movement of the surrounding air and increases with the temperature difference between the conductor and the air. Forced convection on a conductor depends on the wind speed and the wind direction relative to the orientation of the span. A wind direction perpendicular to the conductor has the largest cooling effect. And even without wind, there will still be natural convection.

Radiation cools the conductor. The difference between the conductor temperature and the air temperature causes a transmission of heat from the conductor to the surroundings and the sky.

Convective cooling and joule heating have the most significant effect on the heat balance, as the figure below shows. The solar heating and the radiative cooling are smaller in magnitude.
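To make the calculation concrete, here is a minimal sketch of how the balance can be solved numerically for the ampacity. The four terms are crude placeholders, not the actual CIGRE 601 expressions, and all numbers are made up.

```python
# Minimal sketch of an ampacity calculation: find the largest current for
# which the steady-state conductor temperature stays at or below the limit.
# The heat-balance terms below are crude placeholders, not CIGRE 601.

def heat_balance(current_a, conductor_temp, weather):
    """Net heating (W/m) at a given conductor temperature; positive means
    the conductor would keep warming up."""
    resistance = 7e-5 * (1 + 0.004 * (conductor_temp - 20.0))  # ohm/m, placeholder
    joule = current_a ** 2 * resistance
    solar = weather["solar_w_per_m"]
    convective = weather["wind_factor"] * (conductor_temp - weather["air_temp"])
    radiative = 0.02 * (conductor_temp - weather["air_temp"])
    return joule + solar - convective - radiative

def steady_state_temp(current_a, weather, lo=-40.0, hi=300.0):
    """Conductor temperature where the heat balance is zero (bisection)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if heat_balance(current_a, mid, weather) > 0:
            lo = mid  # still heating: the steady state is warmer than mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ampacity(weather, max_conductor_temp=80.0, lo=0.0, hi=5000.0):
    """Largest current (A) keeping the conductor at or below its thermal limit."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if steady_state_temp(mid, weather) <= max_conductor_temp:
            lo = mid
        else:
            hi = mid
    return lo

print(round(ampacity({"air_temp": 10.0, "wind_factor": 20.0, "solar_w_per_m": 10.0})), "A")
```

The idea is simply that the steady-state conductor temperature rises with current, so the ampacity can be found by searching for the current at which it reaches the thermal limit.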

As it turns out, many of the parameters in the equation are weather-dependent. In the following sections I will explain three indirect models that go to different lengths of applying weather data when calculating thermal ratings.

The simplest weather-based ampacity calculation: a static seasonal limit

Historically, a simple and common approach to the problem has been to establish seasonal ampacity ratings for a transmission line by considering the worst-case seasonal weather conditions of the least advantageous span. This approach can lead to limits such as:

  • Summer: 232 A
  • Spring and autumn: 913 A
  • Winter: 1913 A

Such a model requires very little data, but has the disadvantage of being too conservative, providing a lower ampacity than necessary in all but the worst-case weather conditions. The true ampacities would vary over time.

Including air temperature in the calculation

Statnett includes air temperatures in their ampacity calculations. The temperatures can come from measuring units or a forecast service. This approach is vastly more sophisticated than the static seasonal limits, but there are still many other factors where Statnett resorts to simplifying assumptions.

The wind speed and wind direction are recognized as the most difficult parameters to forecast. The wind can vary a lot along the length of a span and over time. To overcome the modeling challenge, the traditional Statnett solution has been to assume a constant wind speed from a fixed wind direction. The effect of solar heating is also reduced to worst-case values for a set of different air temperatures. The end result is ampacity values that depend only on the air temperature. The resulting ampacities are believed to be conservative, especially since the solar heating is often lower than the worst-case values.
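In practice, the end result of this approach can be thought of as a lookup from air temperature to ampacity. The sketch below illustrates the idea; the numbers are made up and are not Statnett’s actual values.

```python
# Illustrative air-temperature-dependent rating: a precomputed table with
# linear interpolation between the tabulated points. Values are invented.
import numpy as np

temps_c = np.array([-30.0, -10.0, 0.0, 10.0, 20.0, 30.0])
amps_a  = np.array([1900.0, 1600.0, 1450.0, 1250.0, 1050.0, 900.0])

def rating_from_air_temp(air_temp_c):
    return float(np.interp(air_temp_c, temps_c, amps_a))

print(rating_from_air_temp(5.0))  # -> 1350.0
```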

However, there are counterexamples where the true ampacity would be lower than the values calculated by Statnett. Combinations of low wind speed, unfavourable wind direction and solar heating close to the worst case can occur in rare cases, especially for spans where the wind is shielded by terrain. And of course, errors in the input air temperature data can also result in an ampacity that is too high.

Including more weather parameters

Our full weather-based model not only considers air temperature, but also wind speed, wind direction, date and time (to calculate solar radiation) and the configuration of the transmission line span.

Applying the wind speed and direction data is not straightforward. At the same time, the consequences of incorrect ampacities are asymmetric: the additional capacity can never justify clearance violations or material damage to the conductor, so being right on average may not be a good strategy.

Comparing the models

We compared the ampacity from DLR sensors to the three indirect, weather-based calculation approaches for a specific line in the summer, autumn and winter. The DLR sensor estimate should not be accepted as the true ampacity, but is considered to be the most accurate and relevant point of reference.

In summer, the static seasonal limit results in low utilization of the line and low risk, as expected. The current Statnett approach allows higher utilization, but without particularly high risk, since the limits are still conservative compared to the DLR sensor estimate. The weather-based approach overestimates the ampacity on most of these four days in July, but drops below the DLR sensor estimate a few times.

Summer period.

Our autumn calculations tell a similar story. However, the difference between the current Statnett approach and the DLR sensors is even larger, indicating high potential for capacity increases with alternative methods.

Autumn period.

In winter, the static seasonal rating and the current Statnett approach are still more conservative than the DLR sensor, but the weather-based estimate is more conservative than the estimate of the DLR sensor in the chosen period.

Winter period.

The static seasonal limits and Statnett’s air temperature dependent limits are consistently more conservative than the approach with a DLR sensor. Our weather-based approach, on the other hand, gives overly optimistic ampacity ratings, with a few exceptions.

We think we have an idea why.

Weather-based models require high quality weather data or forecasts

For all seasons, the weather-based approach stands out with its large ampacity variations. It is great that our model better reflects the weather-driven temperature dynamics, but the large variations also show the huge impact weather parameters (and their inaccuracies) have on the calculations.

The weather data we used to calculate ampacity were to a large extent forecasts, and even the best forecasts will be inaccurate. But even more importantly: the weather data were never calibrated to local conditions. This overestimates the cooling effect under some conditions: in reality, terrain and vegetation in the immediate surroundings of a power line can deflect much of the wind.

To calculate more accurate weather-based limits, we believe the weather parameters from a forecast cannot be used directly without some form of adjustment.
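One form such an adjustment could take, purely as a hypothetical illustration and not a method we have validated, is a per-span shielding factor that derates the forecast wind speed before it enters the convective cooling term:

```python
# Hypothetical local calibration: derate the forecast wind speed with a
# per-span shielding factor before it enters the convective cooling term.
# The factor would have to be estimated, e.g. from terrain data or from
# periods with local measurements or DLR sensor readings.

def effective_wind_speed(forecast_wind_ms, shielding_factor):
    """shielding_factor in (0, 1]; 1.0 means the forecast is used as-is."""
    return forecast_wind_ms * shielding_factor

print(effective_wind_speed(4.0, 0.6))  # 4 m/s in the forecast -> 2.4 m/s locally
```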

For the future, we would like to find a way to make these adjustments, so that we can increase the ampacity compared to Statnett’s current approach while reducing the risk of overestimation. Additionally, the Norwegian Meteorological Institute provides an ensemble forecast that gives additional information about the uncertainty of the weather forecast. In future work, we would like to use this uncertainty information in our indirect weather-based model.

Using data to handle intra-zonal constraints in the upcoming balancing market

Together with almost half of the Data science group at Statnett, I spend my time building automatic systems for congestion management. This job is fascinating and challenging at the same time, and I would love to share some of what our cross-functional team has done so far. But before diving in, let me first provide some context.

A good day at Statnett’s control centre. Photo: Trond Isaksen

The balancing act

Like other European transmission system operators (TSOs), Statnett keeps the frequency stable at 50 Hz by making sure generation always matches consumption. The show is run by the human operators in Statnett’s control centre. They monitor the system continuously and instruct flexible power producers or consumers to increase or decrease their power levels when necessary.

These balancing adjustments are handled through a balancing energy market. Flexible producers and consumers offer their reserve capacity as balancing energy bids (in price-volume pairs). The operators select and activate as many of them as needed to balance the system. To minimize balancing costs, they try to follow the price order, utilizing the least expensive resources first.

While busy balancing the system, control centre operators also need to keep an eye on the network flows. If too much power is injected in one location, the network will be congested, meaning there are overloads that could compromise reliable operation of the grid. When an operator realizes a specific bid will cause congestion, she will mark it as unavailable and move on to use more expensive bids to balance the system.
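As a minimal illustration of these mechanics (the bids and the simplified volume handling below are invented), merit-order activation with unavailable bids skipped could look like this:

```python
# Minimal merit-order activation: activate the cheapest available upward bids
# until the required balancing volume is covered. Bids marked unavailable
# (e.g. because they would cause congestion) are skipped.

def select_bids(bids, needed_mw):
    """bids: list of dicts with 'id', 'price', 'volume_mw' and 'available'."""
    activated = []
    remaining = needed_mw
    for bid in sorted(bids, key=lambda b: b["price"]):
        if remaining <= 0:
            break
        if not bid["available"]:
            continue
        take = min(bid["volume_mw"], remaining)
        activated.append((bid["id"], take))
        remaining -= take
    return activated

bids = [
    {"id": "A", "price": 35.0, "volume_mw": 50, "available": True},
    {"id": "B", "price": 38.0, "volume_mw": 40, "available": False},  # congested area
    {"id": "C", "price": 42.0, "volume_mw": 60, "available": True},
]
print(select_bids(bids, needed_mw=80))  # -> [('A', 50), ('C', 30)]
```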

In the Norwegian system, congestion does not only occur in a few well-known locations. Due to highly distributed generation and a relatively weak grid, there are hundreds of network constraints that could cause problems, and the Norwegian operators often need to be both careful and creative when selecting bids for activation.

A filtering problem

The Nordic TSOs are transitioning to a new balancing model in the upcoming year. A massive change is that balancing bids will no longer be selected by humans, but by an auction-like algorithm, just as in many other electricity markets. This algorithm (unfortunately, but understandably) uses a highly aggregated zonal structure, meaning that it will consider capacity restrictions between the 12 Nordic bidding zones, but not within them.

Consequently, the market algorithm will disregard all of the more obscure (but still important) intra-zonal constraints. This will, of course, lead to market results that simply do not fit inside the transmission grid, and there is neither time nor opportunity after the market clearing to modify the outcome in any substantial way.

My colleagues and I took on the task of creating a bid filtering system. This means predicting which bids would cause congestion and marking them as unavailable to prevent them from being selected by the market algorithm.

Filtering the correct bids is challenging. Network congestions depend on the situation in the grid, which is anything but constant due to variations in generation, consumption, grid topology and exchange with other countries. The unavailability decision must be made something like 15-25 minutes before the bid is to be used, and there is plenty of room for surprises during that period.

How it works

Although I enjoy discussing the details, I will give only a short summary of how the system works. To decide the availability of each bid in the balancing market, we have created a Python universe of custom-built libraries and microservices that follows these steps (a stubbed sketch of the pipeline follows the list):

  1. Assemble a detailed model of the Norwegian power system in its current state. Here, we take the grid model from Statnett’s equipment database and combine it with fresh data from Statnett’s SCADA system.
  2. Adjust the model to reflect the expected state.
    Since we are looking up to 30 minutes ahead, we offset the effect of current balancing actions, and apply generation schedules and forecasts to update all injections in the model.
  3. Prepare to be surprised.
    To make more robust decisions in the face of high uncertainty in exchange volumes, we apply a hundred or more scenarios representing different exchange patterns on the border.
  4. Find the best balancing actions for each scenario of the future.
    Interpreting the results of an optimal power flow calculation provides lots of insight into which bids should be activated (and which should not) in each exchange scenario.
  5. Agree on a decision.
    In the final step, the solution from each scenario is used to form a consensus decision on which bids to make unavailable for the balancing market algorithm.
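As promised above, here is a heavily stubbed sketch of how steps 1 to 4 could be wired together. Every function name and data structure is invented for illustration; the real system consists of separate microservices operating on a detailed grid model.

```python
# Runnable but heavily stubbed sketch of steps 1-4 above. All functions and
# data structures are invented for illustration purposes.
import random

def build_network_model():
    # Step 1: detailed grid model combined with fresh SCADA measurements (stubbed).
    return {"injections_mw": {"node_a": 120.0, "node_b": -90.0}}

def adjust_to_expected_state(model, minutes_ahead=30):
    # Step 2: offset current balancing actions, apply schedules and forecasts (stubbed).
    return model

def sample_exchange_scenarios(n=100):
    # Step 3: different assumptions on cross-border exchange (stubbed).
    return [{"net_export_mw": random.uniform(-1500.0, 1500.0)} for _ in range(n)]

def rejected_bids(model, scenario, bids):
    # Step 4: in the real system this is read off an optimal power flow solution.
    # Stub: pretend certain up-bids get rejected when exports are very high.
    return {b for b in bids if b.startswith("NO2_up") and scenario["net_export_mw"] > 1000.0}

bids = ["NO1_up_1", "NO2_up_1", "NO2_up_2"]
model = adjust_to_expected_state(build_network_model())
per_scenario = [rejected_bids(model, s, bids) for s in sample_exchange_scenarios()]
# Step 5, the consensus decision, is sketched further below.
```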

An example result and how to read it

My friend and mentor Gerard Doorman recently submitted a paper for the 2022 CIGRE session in Paris, explaining the bid filtering system in more detail. I will share one important figure here to illustrate the final step of the bid filtering method. The figure shows the simulated result of running the bid filtering system at 8 AM on August 23, 2021.

Simulated bid filtering results from August 23, 2021. Bids are sorted horizontally, according to price, and grouped by their bidding zone (NO1 through 5) and direction (up/down).

Before you cringe from information overload, let me assist you by explaining that the abundance of green cells in the horizontal bar on top shows that the vast majority of balancing bids were decided to be available.

There are also yellow cells, showing bids that likely need to be activated to keep the system operating within its security limits, no matter what happens.

The red cells are bids that have been made unavailable to prevent network congestion. To understand why, we need to look at the underlying results in the lower panel. Here, each row presents the outcome of one scenario, and purple cells show the bids that were rejected, i.e. not activated in the optimal solution, although being less expensive than other ones that were activated for balancing in the same scenario (in pink).

The different scenarios often do not tell the same story: a bid that is rejected in one scenario can be perfectly fine in the next; it all depends on the situation in the grid and which other bids are also activated. Because of this ambiguity, business rules are necessary to create a reasonable aggregate result, and the final outcome will generally be imperfect.
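To make this last step concrete, here is an illustrative consensus rule. The real business rules are more elaborate and the threshold below is invented; the `per_scenario` input is a list of sets of rejected bids, like the one produced by the sketch in the previous section.

```python
# Illustrative consensus rule: make a bid unavailable if it is rejected in
# more than a given share of the exchange scenarios. Not the actual rule set.

def consensus(per_scenario, bids, max_reject_share=0.3):
    decisions = {}
    for bid in bids:
        share = sum(bid in rejected for rejected in per_scenario) / len(per_scenario)
        decisions[bid] = "unavailable" if share > max_reject_share else "available"
    return decisions

per_scenario = [{"NO2_up_1"}, {"NO2_up_1", "NO2_up_2"}, set(), set()]
print(consensus(per_scenario, ["NO1_up_1", "NO2_up_1", "NO2_up_2"]))
# -> {'NO1_up_1': 'available', 'NO2_up_1': 'unavailable', 'NO2_up_2': 'available'}
```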

So, does this filtering system have any positive impact on network congestion at Statnett? I will leave the answer for later, but if you’re curious to learn more, don’t hesitate to leave a comment.

Retrofitting the Transmission Grid with Low-cost Sensors

In Statnett, we collect large amounts of sensing data from our transmission grid. This includes both electric parameters such as power and current, and parameters more directly related to the individual components, such as temperatures, gas concentrations and so on.

Nevertheless, the state and behaviour of many of our assets are to a large extent not monitored and to some extent unobservable outside of regular maintenance and inspection rounds. We believe that more data can give us a better estimate of the health of each component, allowing for better utilization of the grid, more targeted maintenance, and reduced risk of component failure.

About a year ago, we therefore acquired a Pilot Kit from Disruptive Technologies, packed with a selection of miniature sensors that are simple to deploy and well-suited for retrofitting on existing infrastructure. We set up a small project in collaboration with Statnett R&D, where we set about testing the capabilities of this technology, and its potential value for Statnett.

Since then we’ve experimented with deploying these tiny IoT-enabled devices on a number of things, ranging from coffee machines to 420 kV power transformers.

To gauge the value and utility of these sensors in our transmission grid, we had to determine what to measure, how to measure, and how to gather and analyze the data.

This blog post summarizes what we’ve learnt so far, and evaluates some of the main use cases we’ve identified for instrumenting the transmission grid. The process is by no means finished, but we will describe the steps we have taken so far.

Small, Low-cost IoT-Sensors

Disruptive Technologies is a fairly young company whose main product is a range of low-cost, IoT-enabled sensors. Their lineup includes temperature sensors, touch sensors, proximity sensors, and more. In addition to sensors, you need one or more cloud connectors per area you are looking to instrument. The sensors transmit their signals to a cloud connector, which in turn streams the data to the cloud through mobile broadband or the local area network. Data from the sensors are encrypted, so a sensor can safely transmit through any nearby cloud connector without prior pairing or configuration.

The devices and the accompanying technology have some characteristics that make them interesting for power grid instrumentation, in particular for retrofitting sensors to already existing infrastructure:

  • Cost: The low price of sensors makes experimental or redundant installation a low-risk endeavour.
  • Size: Each sensor is about the size of a coin, including battery and the wireless communication layer.
  • Simplicity: Each sensor will automatically connect to any nearby cloud connector, so installation and configuration amounts to simply sticking the sensor onto the surface you want to monitor, and plugging the cloud connector into a power outlet.
  • Battery life: expected lifetime for the integrated battery is 15 years with 100 sensor readings per day.
  • Security: Data is transmitted with full end-to-end encryption from sensor to Disruptive’s cloud service.
  • Open API: Data are readily available for download or streaming to an analytics platform via a REST API.

Finding Stuff to Measure

Disruptive develops several sensor types, including temperature, proximity and touch. So far we have chosen to focus primarily on the temperature sensors, as this is the area where we see the most potential value for the transmission grid. We have considered use cases in asset health monitoring, where temperature is a well-established indicator of weaknesses or incipient failure. Depending on the component being monitored, unusual heat patterns may indicate poor electrical connections, improper arc quenching in switchgear, damaged bushings, and a number of other failure modes.

Asset management and thermography experts in Statnett helped us compile an initial list of components where we expect temperature measurements to be useful:

  • Transformers and transformer components. At higher voltage levels, transformers typically have built-in sensors for oil temperature, and modern transformers also tend to monitor winding and hotspot temperatures. Measurement on sub-components such as bushings and fans may however prove to be very valuable.
  • Circuit breakers. Ageing GIS facilities are of particular interest, both due to their importance in the grid and the risk of environmental consequences in case of SF6 leakage. Other switchgear may also be of interest, since intermittent heat development during breaker operation will most likely not be uncovered by traditional thermography.
  • Disconnectors. Disconnectors (isolator switches) come in a number of flavors, and we often see heat development in joints and connection points. However, we know from thermography that hotspots are often very local, and it may be hard to predict in advance where on the disconnector the sensor should be placed.
  • Voltage and current transformers. Thermographic imaging has shown heat development in several of our instrument transformers. Continuous monitoring of temperature would enable us to better track this development and understand the relationship between power load, air temperature and transformer heating.
  • Capacitor banks. Thermography often reveals heat development at one or more capacitors in capacitor banks. However, fully monitoring all potential weak spots of a capacitor bank would require a very large number of sensors.

A typical use case for the proximity sensors in the power system is open door or window alarms. Transmission level substations are typically equipped with alarms and video surveillance, but this might be relevant for other types of equipment in the field, or at lower voltage levels.

The touch sensors may for instance be used to confirm operator presence at regular inspection or maintenance intervals. Timestamping and georeferencing as part of an integrated inspection reporting application is a more likely approach for us, so we have not pursued this further.

Deployment on Transformer

Our first pilot deployment (not counting the coffee machine) was on three transformers and one reactor in a 420 kV substation. The sensors were deployed in winter when all components were energized, so we could only access the lower part of the main transformer and reactor bodies. This was acceptable, since the primary intention of the deployment was to gain experience with the process and hopefully avoid a few pitfalls in the future.

Moreover, the built-in temperature sensors in these components gave us a chance to compare readings from Disruptive sensors with the “true” inside temperature, giving us an impression of both the reliability of readings from Disruptive sensors and the ability to estimate oil temperature based on measurements on the outside of the transformer housing. We also experimented with different types of insulation covering the sensor, in order to gauge the effect of air temperature variations on sensor readings.

Deployment in Indoor Substation

Following the initial placement on the transformer bodies, we instrumented an indoor GIS facility, where we deployed sensors on both circuit breakers and disconnectors, plus one additional sensor to measure the ambient temperature in the room. Since the facility is indoors and all energized components are fully insulated, this deployment was fairly straightforward. Our main challenge was that the cloud connector had a hard time finding a cellular signal, but with a bit of fiddling we eventually found a few locations in the room with sufficient signal strength.

Concrete buildings can make it hard to find a good signal for mobile broadband. In this case we raised the cloud connector to the ceiling in search of a signal.

Deployment on Air Insulated Breakers

Finally, we took advantage of a planned disconnection of one of the busbars at a 300 kV facility to instrument all poles of an outdoor SF6 circuit breaker. As mentioned above, disconnectors and instrument transformers were other components we wanted to instrument. However, due to the layout of the substation, these were still energized, so the circuit breakers were the only components we could gain access to.

Apart from monitoring the breakers, this deployment enabled us to test how the sensors reacted to being placed directly on uninsulated high-voltage equipment, and to check for any negative side-effects such as corona discharges.

Developing a Microservice for Data Ingestion

The sensors from Disruptive Technologies work by transmitting data from the sensor, via one or more cloud connectors, to Disruptive’s cloud software solution. The data are encrypted end-to-end, so the sensors may use any reachable cloud connector to transmit data.

As a precautionary measure, we opted to maintain an in-house mapping between sensor device ID and placement in the grid. This way, there is nothing outside Statnett’s systems to identify where the sensor data are measured.

Disruptive provides various REST APIs for streaming and downloading data from their cloud solution. For internal technical reasons, we chose to use a “pull” architecture, where we download new sensor readings every minute and pass them on to our internal data platform. We therefore developed a microservice that (a condensed sketch follows the list):

  1. Pulls data from Disruptive’s web service at regular intervals.
  2. Enriches the data with information about which component the sensor is placed on and how it is positioned.
  3. Produces each sensor reading as a message to our internal Kafka cluster.
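Below is a condensed sketch of that loop. The endpoint URL, field names, topic name and the mapping from device ID to component are placeholders; the real service also handles authentication, paging, retries and error handling.

```python
# Condensed sketch of the ingestion loop. Endpoint, fields and topic are
# placeholders, not Disruptive's actual API.
import json
import time

import requests
from kafka import KafkaProducer  # kafka-python

SENSOR_TO_ASSET = {"sensor-123": {"station": "Substation X", "component": "breaker A, phase L1"}}

producer = KafkaProducer(bootstrap_servers="kafka.internal:9092",
                         value_serializer=lambda v: json.dumps(v).encode("utf-8"))

def poll_once():
    # 1. Pull recent readings from the cloud service (placeholder endpoint).
    response = requests.get("https://api.example.com/v2/events", timeout=10)
    response.raise_for_status()
    for event in response.json().get("events", []):
        # 2. Enrich with the in-house mapping from device ID to grid component.
        enriched = {**event, **SENSOR_TO_ASSET.get(event.get("deviceId"), {})}
        # 3. Produce the reading as a message on the internal Kafka cluster.
        producer.send("sensor-readings", enriched)
    producer.flush()

while True:
    poll_once()
    time.sleep(60)  # the service pulls new readings every minute
```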

From Kafka, the data are consumed and stored in a TimescaleDB database. Finally, we display and analyze the data using a combination of Grafana and custom-built dashboards.

The microservice runs on our internal Openshift Container Platform (PaaS).

We wrote a custom adapter that ingests data from Disruptive’s web services and produces messages on our internal Kafka cluster. From here, the data flows to Timescale and dashboards. The data are anonymous on Disruptive’s web service, so the adapter also adds contextual information to the messages.

The Value of Data

Do these newly acquired data help us take better decisions in the operation of the grid, and hence operate the grid in a smarter, safer, and more cost-effective way? This is really the litmus test for the value of retrofitting sensors and gathering more data about the components in the grid.

In this pilot project, we consulted field personnel and component experts regularly for advice on where and how to place sensors. However, it was the FRIDA project, a large cross-disciplinary digitalization project at Statnett, that really enabled and inspired the relevant switchgear expert to analyze the data we had collected in more detail.

Once he looked at the data, he discovered heat generation in one of the breakers, with temperatures that significantly exceeded what would be expected under normal operation. A thermographic imaging inspection was immediately ordered, which confirmed the readings from the Disruptive sensors.

The temperature sensors made us aware of high temperatures in one of the breakers, indicating a possible incipient failure of a critical component. As a result, the control centre changed the operating pattern of the substation and planned maintenance of the unhealthy component. The figure shows the temperature on each phase of the breaker, with busbar A in the top panel and busbar B in the bottom one. The room temperature is shown as a dotted orange line.

Based on the available data, the breaker and thermography experts concluded that the breaker, although apparently operating normally, shows signs of weakness and possibly incipient failure. This in turn led to new parts being ordered and maintenance work being planned for the near future.

While waiting for the necessary maintenance work to be performed, the operation of the substation has been adapted to reduce the stress on the weakened equipment. Until maintenance is performed, the limits for the maximum amount of power flowing through the switchgear are updated on a regular (and frequent) basis, based on the latest temperature readings from our sensors. Having access to live component state monitoring has also enabled the control centre to make other changes to the operating pattern of the substation.

The control centre now continuously monitors the heat development in the substation, using the sensors from this pilot project. The new data has thus not only helped us discover an incipient failure in a critical component, it also allows us to keep operating the substation in a safe and controlled way while we wait for an opportunity to repair or replace the troublesome components.

Lessons Learned in the Field

Under optimal conditions, the wireless range, i.e. the maximum distance between sensors and cloud connectors, is 1000 meters with line of sight. Indoors, signals are reliably transmitted over ranges of 20+ meters in normal mode when conditions are favorable. A weak signal will make the sensor transmit in “Boost mode”, which quickly drains the battery.

High-voltage power transformers are huge oil-filled steel constructions, often surrounded by thick concrete blast walls. We quickly learned that these are not the best conditions for low-power wireless signal transfer. When attaching the sensors to the main body of the transformer, we observed that the communication distance was reduced to less than 10 meters and required line of sight between the sensors and the cloud connector.

One reason for the short transmission distance is that the metal body of the transformer absorbs most of the RF signal from the sensor. Disruptive therefore advised us to use a bracket when mounting the sensors on large metal bodies, so that the sensors could be mounted at a 90 degree angle to the surface. We used Lego blocks for this purpose. Disruptive have since developed a number of range extenders that are arguably better-looking.

Lego bracket vs. surface range extender. If you roll your own, keep in mind where on the sensor the actual temperature measurement is made, and rotate it accordingly. On the sensors in our Pilot Kit, this was the upper right corner of the device (not in the corner where the small orange dot is located).

Although we did experience an improvement in signal transmission range when mounting the sensors at an angle to the transformer body, sending signals around the corner of the transformer still turned out to be very challenging.

Disruptive have yet to develop a ruggedized cloud connector. The need to position the cloud connector such that line of sight was maintained limited our ability to place it inside existing shelters such as fuse cabinets. We therefore developed a weather-proof housing for outdoor use, so we could position the cloud connectors for optimal transmission conditions.

A weather-proof housing allowed us to position the cloud connectors for optimal transmission conditions.

All sensors have their device ID printed on the device itself. However, the text is tiny and the ID is long and complex. We opted to put a short label on each sensor using a magic marker in order to simplify the deployment process in the field. This simplistic approach was satisfactory for our pilot project, and required minimal support system development, but obviously does not scale to larger deployments.

As mentioned above, we have deployed a number of sensors directly on energized, uninsulated, 300 kV components. We were quite curious to see how the sensors would cope with being mounted directly on high-voltage equipment. So far, the measurements from the sensors seem to be unaffected by the voltage. However, we have lost about 25 % of these sensors in less than a year due to high battery drainage. We suspect that this may be related to the environment in which they operate, but it may also be bad luck or related to the fact that our sensors are part of a pre-production pilot kit.

Finally, sticking coin-sized sensors onto metal surfaces is easy in the summer. It’s not always equally easy on rainy days or in winter, with cold fingers and icy or snow-covered components.

Other Things we Learned Along the Way

So far, our impression is that the data quality is good. The sensor readings are precise, and the sensors are mostly reliable. The snapshot from Grafana gives an impression of the data quality: as can be seen, there is very good correspondence between the temperature readings from the three phases on busbar A after it has been switched out and disconnected.

However, both the cloud connectors and the SaaS-based architecture are weak spots where redundancy is limited. If a cloud connector fails, we risk losing data from all the sensors communicating with that connector. This can to some extent be alleviated by using more cloud connectors. The SaaS architecture is more challenging: downtime on Disruptive’s servers sometimes affects the entire data flow from all their sensors.

Deployment is super easy, but an automated link to our ERP system would ease installation further, and significantly reduce the risk of human error when mapping sensors to assets.

The sensors are very small. This is cool, but if they were 10x larger with a 10x stronger wireless signal, that would probably be better for many of our use cases.

Finally, communication can be challenging when dealing with large metal and concrete constructions. This goes for both the communication between sensor and cloud connector, and for the link between the cloud connector and the outside world. This is in most cases solvable, but may require some additional effort during installation.

Next Step: Monitor All Ageing GIS Facilities?

This pilot project has demonstrated that increased component monitoring can have a high value in the transmission grid.

Our main focus has been on temperature monitoring to assess component health, but the project has spurred quite a lot of enthusiasm in Statnett, and a number of other application areas have been suggested to us during the project.

One of the prime candidates for further rollout is to increase the monitoring of GIS substations, either by adding sensors to other parts of the facility, by selecting other substations for instrumentation, or both.

A further deployment of sensors must be aligned with other ongoing activities at Statnett, such as rollout of improved wireless communication at the substations. Nonetheless, we have learned that there are many valuable use cases for retrofitting sensors to the transmission grid, and we expect to take advantage of this kind of technology in the years to come.
