A well-designed condition monitoring program helps grid operators make better decisions — about maintenance, investment and risk. But getting there requires asking the right questions first. Here are the five W’s we always come back to.

1. Why do you want to monitor in the first place?

It sounds like an obvious question. In our experience, it’s the one that gets skipped most often. Organizations start with “what can we measure?” and end up with a lot of data and an unclear path to value.

The reasons to monitor are real and varied, but what matters is that they point in the same direction. Safety, reliability, environmental responsibility, regulatory compliance. These aren’t separate arguments. They’re the same argument looked at from different angles.

— Safety. Remote visibility means personnel don’t need to be on site when there’s an elevated risk. It also reduces routine travel to hard-to-reach locations.

— Reliability. Condition-based decisions replace calendar-based ones. You act when there’s a reason to, not because the schedule says so.

— Environmental responsibility. Continuous monitoring reduces the risk of oil leaks, SF6 releases and cooling failures — problems that are far cheaper to prevent than to clean up.

— Regulatory compliance. For certain assets, documented monitoring records aren’t optional. Getting ahead of that requirement is easier than retrofitting it.

Knowing your “why” shapes everything that follows — which assets to prioritize, what to measure, and what good looks like when you get there.

2. Who uses this data — and who needs to be involved?

The data collected from a substation isn’t just useful to the maintenance team. It’s useful to operations, asset management, finance and anyone responsible for long-term infrastructure planning. The challenge is that each of these groups needs something slightly different from the same underlying information.

A maintenance technician on site needs to know what’s happening at this station, right now. An asset manager needs to understand how the fleet is ageing and where the next capital decision should land. A management team needs confidence that risks are understood and being managed. Same data, very different questions.

This is worth establishing early. Programs owned only by the maintenance department tend to stay there. Broader ownership takes deliberate effort, but it pays off.

— Maintenance teams get earlier warnings and clearer priorities. Site visits become purposeful rather than precautionary.

— Operations groups gain situational awareness — the ability to respond to a developing situation rather than react to a failure.

— Asset managers get condition data to support replacement and investment decisions based on actual state, not assumed age.

— Standards and procurement teams can write better specifications for future equipment, informed by how current assets behave in the field.

The programs that deliver the most value are the ones where these groups aligned on objectives before deployment — not after.

3. What should you measure?

Almost anything can be measured. That’s not the useful answer. The useful question is: where does a fault cause the most damage?

Risk can be broken down into two parts — the likelihood of a fault occurring, and the consequence if it does. The combination of the two gives you a prioritization list. That’s where you start measuring, not from a catalogue of available sensors.
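The likelihood-times-consequence framing can be sketched as a simple scoring exercise. A minimal sketch — the asset names and the 1–5 scales below are illustrative assumptions, not real data:

```python
# Illustrative risk scoring: likelihood and consequence each rated 1-5.
# Asset names and scores are hypothetical examples.
assets = [
    {"name": "Transformer T1 (urban feed)", "likelihood": 2, "consequence": 5},
    {"name": "Breaker B7 (rural spur)",     "likelihood": 4, "consequence": 2},
    {"name": "Transformer T3 (hospital)",   "likelihood": 3, "consequence": 5},
]

# Risk = likelihood x consequence; rank highest risk first.
for a in assets:
    a["risk"] = a["likelihood"] * a["consequence"]

priority = sorted(assets, key=lambda a: a["risk"], reverse=True)
for a in priority:
    print(f'{a["name"]}: risk {a["risk"]}')
```

The point is not the arithmetic — it’s that the ranked list, not a sensor catalogue, decides where measurement starts.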

A few things worth knowing before you choose

— Not all components need the same level of attention. A small number of high-risk assets justify continuous, detailed monitoring. The majority can be managed with simpler and less frequent methods. Treating everything equally is rarely the right approach.

— Make sure you’re measuring the right thing. A sensor that doesn’t capture the actual failure mode you’re trying to detect creates false confidence. The measurement must match the mechanism.

— Start with what you already have. Inspection records, maintenance history, fault logs — there’s often more usable information than organizations realize. That’s the foundation. Continuous data collection builds on top of it.

— Cost must be proportional to risk. Frequency, storage, communication and analysis all add up. Real-time monitoring of a low-risk component is rarely defensible; the cost should be weighed against what’s actually at stake.
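One way to keep cost proportional to risk is to map the risk score from the earlier likelihood-times-consequence exercise onto a monitoring tier. The thresholds and tier names here are illustrative assumptions, not prescriptions:

```python
def monitoring_tier(risk_score: int) -> str:
    """Map a risk score (likelihood x consequence, each rated 1-5)
    to a monitoring approach. Thresholds are illustrative only."""
    if risk_score >= 15:
        return "continuous online monitoring"
    if risk_score >= 8:
        return "periodic automated sampling"
    return "routine inspection only"

# A hypothetical high-risk asset gets the most expensive treatment;
# a low-risk one stays on the inspection round.
print(monitoring_tier(20))
print(monitoring_tier(4))
```

The exact cut-offs matter less than the principle: the majority of assets should land in the cheaper tiers.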

You don’t need a big strategy or a big budget to get started. You need a clear question to answer.

Want to go deeper on this? We’ve written a more detailed guide: How do I know what to start measuring?

4. Where in your network does monitoring make the most sense?

Once you know what to measure, the next question is where to deploy first. For most organizations, the answer is less complicated than it seems. It’s usually where the combination of consequence and cost is hardest to ignore.

— Remote and hard-to-reach locations. Sites where a single visit takes half a day and requires a specialist. Continuous monitoring doesn’t supplement physical presence here — it replaces it for most purposes.

— Critical loads. Hospitals, government buildings, industrial facilities. Where an outage has immediate and serious consequences beyond operational inconvenience.

— Problem assets. Every network has a subset of equipment that consumes disproportionate attention. These are often the easiest starting points. The business case is self-evident and the value is visible quickly.

— Generation and connection points. As more renewables and distributed generation enter the grid, protecting the assets at these connection points becomes increasingly important.

Starting focused is not the same as thinking small. A well-chosen first deployment creates proof of value that makes the next one easier to justify.

5. When is the right time to start?

The honest answer is that there’s rarely a wrong time, because you don’t have to do everything at once. The approach that works is to start with a specific problem, demonstrate value, and build from there at a pace that fits your organization.

What makes this possible is that monitoring doesn’t require a clean slate. Existing substations can be connected without rebuilding them. Sensors you already have can be integrated into a shared platform. You start where you are, with what you have, and scale from there.

Natural moments to begin

— When specifying new equipment. The best time to build in monitoring capability is before a transformer is ordered. Standards teams that have done this find it far easier to expand coverage over time.

— When upgrading communications infrastructure. A communication upgrade is a natural opportunity to standardize data collection across a fleet at the same time. One project, two outcomes.

— When an asset reaches midlife. Adding monitoring at this point creates the data history you’ll need when the replacement decision comes — and often extends the asset’s life in the process.

— When something prompts the question. A near-miss, an unusual reading, a regulatory change. These moments create organizational appetite that didn’t exist before. They’re worth acting on.

Start where it makes sense. Connect your most critical assets. Scale when you’re ready.


Working through these questions and want to think about your next steps? We’re happy to help. Contact us at info@gomero.com.