A few weeks ago, I attended Energiforsk’s course on asset management and data collection for electricity grid companies. It was one of the more rewarding days I’ve had in a long time. Not because everything was new to me, but because it confirmed and added nuance to something I’d long suspected: that the industry as a whole is still struggling with the basics, and that’s precisely where the most important work happens.

I wanted to share my reflections and try to put into words what I took away from it. 

Asset management is much more than maintenance

One of the first things that struck me was how broad the domain really is. The course had participants from across the spectrum: grid planning, operations and maintenance. Asset management according to ISO 55001 is about optimizing the management of a grid’s components throughout their entire lifecycle, from planning and construction to operations and maintenance. It’s not a silo — it’s a whole.

Yet asset management is often reduced to a question of maintenance. It isn’t. And that distinction matters when you start thinking about what data you actually need to collect, and why. 

The data foundation is missing — and it’s holding everything back

The most recurring theme of the day was data quality and interoperability — that is, the ability for data from different systems to be shared and used together. Many of the grid companies present still lack basic, structured and accessible maintenance and failure statistics. It might sound surprising, but it’s understandable when you consider how data actually disappears: paper binders and drawings are discarded without being replaced digitally; IT systems are replaced without migrating historical records; field staff record observations in Excel or Word files that are difficult to systematize, rather than in structured databases.
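
To make “structured” concrete, here is a minimal sketch of what a failure record can look like as a database table instead of a free-text document. It is illustrative only: the field names and the failure-mode vocabulary are my own assumptions, not an industry standard. The point is that every observation gets the same fields, which is what makes the statistics usable decades later.

    import sqlite3
    from datetime import datetime, timezone

    # Illustrative schema: the field names are assumptions, not a standard.
    # What matters is that every event is recorded with the same fields.
    conn = sqlite3.connect("failures.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS failure_event (
            event_id     INTEGER PRIMARY KEY,
            asset_id     TEXT NOT NULL,   -- which component failed
            occurred_at  TEXT NOT NULL,   -- ISO 8601 timestamp
            failure_mode TEXT NOT NULL,   -- value from an agreed, fixed list
            action_taken TEXT,            -- e.g. repair / replace / inspect
            notes        TEXT             -- free text as a complement, not a substitute
        )
    """)

    # A field observation, recorded with the same structure every time.
    conn.execute(
        "INSERT INTO failure_event (asset_id, occurred_at, failure_mode, action_taken, notes)"
        " VALUES (?, ?, ?, ?, ?)",
        ("TX-0042", datetime.now(timezone.utc).isoformat(),
         "insulation_degradation", "replace", "Found during a routine round."),
    )
    conn.commit()

    # Twenty years on, this question takes one line; a folder of Word files does not.
    for row in conn.execute(
        "SELECT failure_mode, COUNT(*) FROM failure_event GROUP BY failure_mode"
    ):
        print(row)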

The course illustrated this elegantly with a simple line of reasoning: to make data-driven decisions, you need analysis. To analyze, you need data. And for that, you actually need to collect it — consistently and systematically, with a clear idea of what it will be used for.

Perhaps the most striking example was failure statistics. The value of that kind of data compounds over time, but only if it’s collected from the start and managed properly. Those who don’t begin now will find themselves without it in 20 years, just when they need it most.

The industry is stuck in manual expert analysis

Another reflection I brought home is that there is a clear imbalance in how analytical work is actually carried out. The focus lies heavily on manual expert analyses: skilled engineers doing deep dives into specific problems. That’s valuable, but it doesn’t scale.

What’s missing are the simple, automated and continuous analyses. Routine event marking, continuous condition monitoring, automated anomaly reports. Not advanced AI — but the basic mechanisms that free up experts’ time for work that genuinely requires their expertise.
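
To show how low the bar is meant to be: the kind of automated anomaly report in question can be as simple as checking each new reading against a rolling baseline. Everything in the sketch below is an illustrative assumption (the transformer-temperature feed, the window size, the threshold), not a recommendation of specific values.

    import statistics
    from collections import deque

    def flag_anomalies(readings, window=24, k=3.0):
        """Flag readings more than k standard deviations from a rolling baseline.

        Deliberately simple: window and k are illustrative, and real data
        also needs handling of gaps, sensor faults and seasonality.
        """
        baseline = deque(maxlen=window)
        for t, value in readings:
            if len(baseline) == window:
                mean = statistics.mean(baseline)
                spread = statistics.pstdev(baseline)
                if spread > 0 and abs(value - mean) > k * spread:
                    yield t, value, mean  # a candidate line in the daily report
            baseline.append(value)

    # Hypothetical hourly transformer-temperature feed: (hour, degrees C),
    # a normal daily cycle followed by one clearly abnormal reading.
    feed = [(h, 62.0 + (h % 24) * 0.1) for h in range(48)] + [(48, 95.0)]
    for hour, value, mean in flag_anomalies(feed):
        print(f"hour {hour}: {value:.1f} °C against baseline {mean:.1f} °C")

Nothing here requires a data scientist or an AI platform; it requires that the readings exist in one place, in a consistent format, which is exactly the data foundation discussed above.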

Paradoxically, automating the simple analyses is precisely what makes the advanced ones possible. You cannot jump straight to predictive maintenance without a functioning infrastructure for collecting and managing data in real time.

Accessibility and classification — an unnecessary bottleneck?

Something that struck me during the day was how information classification sometimes creates an anxiety that actually holds back digitalization. Data that is essentially unproblematic ends up being handled with the same caution as sensitive operational information, which means it isn’t used at all.

It’s not about lowering security standards. It’s about classifying more clearly what is actually sensitive, so that data which can be freely shared and analyzed is also made genuinely accessible — without friction, for those who need it, when they need it, to create value for the organization. 

The new regulatory period opens new opportunities — but raises new demands

The practical discussion about business and procurement was also worth highlighting. The proposals ahead of the new regulatory period create room for a different kind of investment decision than before. But they also place demands on us as suppliers: we need to be able to package a clear business case for each application area, and get better at explaining why data collection infrastructure for maintenance is a prerequisite, not an add-on.

It’s easy to talk about predictive maintenance and condition-based decisions as if they were simple next steps. In practice, they require a solid data foundation that many organizations don’t yet have in place. That’s what we need to help customers understand — and build. 

What I’m taking with me

Sitting in a room with grid companies and hearing them describe their challenges openly and honestly was valuable in a way no report can replicate. The picture that emerged was not worrying — it was actually hopeful. The industry knows what it needs to do. People understand the connection between data, analysis and better decisions. What’s often missing is not the will, but the right support to take those first steps.

The first steps don’t need to be big. In practice, it’s about starting to map what data you actually already have and where it disappears in your processes today. The next step is to identify a concrete application area with a clear business case: what do we want to be able to decide, and what data does that require? From there, it’s possible to build a collection infrastructure that starts small but holds up as needs grow.

That is precisely where Gomero wants to be. Not as a product vendor pushing solutions, but as a partner helping to lay that very foundation.