Predictive maintenance is often presented through the lens of rolling stock: monitoring a motor, a compressor, a door. The object is identifiable, localised, and sensor data ties back to a specific piece of equipment. But a railway network is not just trains. It rests on hundreds, even thousands, of kilometres of linear infrastructure: tracks, overhead lines, switches. And there, the rules of the game change fundamentally.
Linear infrastructure: a problem measured in hundreds of kilometres
When you monitor a door system, you work on a discrete piece of equipment. Each door has an identifier, a location within the vehicle, its own history. The “one piece of equipment = one model” logic is straightforward.
For infrastructure, the object monitored is not a discrete piece of equipment but a continuous line. Track geometry, the track's conformity to its theoretical layout, degrades progressively under the effects of traffic, weather conditions and soil characteristics. This degradation is not uniform: some sections age faster than others, depending on curvature, load borne and ballast age.
Likewise, the overhead line's contact wire, which supplies electricity to trains, wears out through friction with the pantograph. Wear depends on track geometry, running speed and the mechanical tension of the wire. It progresses slowly, and when the contact wire reaches a critical wear threshold, intervention is required to prevent a rupture that would immobilise the entire line.
The challenge for an urban rail operator or a national infrastructure manager is the same: anticipating where and when degradation will reach a critical threshold, in order to plan works optimally. According to the European Union Agency for Railways, operators using predictive maintenance have observed a 37% reduction in track-related incidents.
From measurement trains to predictive models
The good news is that the data exists. Operators have measurement trains (or inspection trolleys) that regularly travel the network and record precise readings of the infrastructure’s condition. For track geometry, this includes longitudinal and transverse level, alignment, gauge and cant. For overhead lines, it covers contact wire wear, stagger and wire height.
These measurement campaigns generate georeferenced files: each reading is tied to a Kilometre Point (KP) that pinpoints exactly where on the network it was taken. By repeating these campaigns over time (monthly, quarterly), you obtain a spatialised time series: the evolution of each parameter, at each point of the network, over time.
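To make the idea of a spatialised time series concrete, here is a minimal sketch: readings keyed by campaign date and Kilometre Point are pivoted so that each KP carries its own history. The dates, KPs and longitudinal-level values are invented for illustration.

```python
from collections import defaultdict

# Hypothetical readings: (campaign date, kilometre point in km, longitudinal level in mm)
readings = [
    ("2024-01", 12.350, 1.8),
    ("2024-01", 12.375, 2.1),
    ("2024-02", 12.350, 2.0),
    ("2024-02", 12.375, 2.4),
    ("2024-03", 12.350, 2.3),
    ("2024-03", 12.375, 2.8),
]

# Pivot into a spatialised time series: for each KP, the parameter's evolution over campaigns
series_by_kp = defaultdict(list)
for date, kp, value in readings:
    series_by_kp[kp].append((date, value))

print(series_by_kp[12.375])
# [('2024-01', 2.1), ('2024-02', 2.4), ('2024-03', 2.8)]
```

Each list is then a per-KP time series that degradation models can work on directly.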
Predictive maintenance applies to these georeferenced time series. But the technical challenges remain specific and considerable.
Automatic alignment: a key step in multi-dimensional processing
The first technical hurdle, and the most underestimated, is the spatial alignment of measurements. From one campaign to the next, the GPS positioning of the measurement train is not perfectly reproducible. A discrepancy of a few metres in localisation is enough to make direct comparison between two campaigns impossible. If you overlay the longitudinal level curves from January and March, the differences observed mix actual degradation with positioning error.
Automatic alignment of measurements is part of the multi-dimensional data processing layer. The algorithm registers successive campaigns against one another despite GPS localisation errors and Kilometre Point variations, enabling reliable campaign-to-campaign comparison and, downstream, a trustworthy predictive analysis.
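One common way to estimate the offset between two campaigns is to slide one profile over the other and keep the shift that maximises their correlation. The sketch below illustrates that idea on synthetic data; it is not DiagFit's actual alignment algorithm.

```python
def best_offset(ref, new, max_shift):
    """Estimate the spatial offset (in samples) between two campaign profiles
    by maximising their overlap correlation. Minimal sketch of the idea only."""
    best, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        # Pair up the readings of the two profiles under this candidate shift
        pairs = [
            (ref[i], new[i + shift])
            for i in range(len(ref))
            if 0 <= i + shift < len(new)
        ]
        score = sum(a * b for a, b in pairs) / len(pairs)
        if score > best_score:
            best_score, best = score, shift
    return best

# Synthetic example: the March profile is the January profile shifted by 3 samples,
# mimicking a GPS localisation error between the two passes
january = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0]
march   = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0]
print(best_offset(january, march, max_shift=5))  # 3
```

Once the offset is known, the newer campaign can be re-indexed so that both profiles refer to the same physical Kilometre Points before any differencing is done.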
Projecting degradation: anticipating where and when to act
Once measurement campaigns are properly aligned and synchronised in time, predictive analysis becomes possible. It rests on two complementary mechanisms.
The first is the calculation of the drift rate. For each Kilometre Point, the algorithm estimates the degradation slope of the monitored parameter (level, gauge, contact wire wear). This drift rate is not constant: it depends on local factors (curvature, traffic, support quality) and can suddenly accelerate if an event changes the mechanical conditions of the section. The algorithm detects these regime shifts and adjusts its projections accordingly.
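The simplest form of drift-rate estimation is a least-squares slope over the aligned campaign values at one KP. The sketch below shows that baseline; the values are invented, and a production system would also weight campaigns and detect the regime shifts mentioned above.

```python
def drift_rate(values):
    """Least-squares slope of a parameter over equally spaced campaigns
    (e.g. mm of longitudinal level per month). Minimal sketch: no weighting,
    no regime-shift detection."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Longitudinal level (mm) at one KP over five monthly campaigns
print(drift_rate([1.8, 2.0, 2.3, 2.5, 2.8]))  # 0.25 mm per month
```

Detecting a regime shift then amounts to noticing that recent campaigns deviate systematically from this fitted slope and refitting on the post-event window.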
The second mechanism is the ranking of sections by criticality level. By cross-referencing the current value of the parameter, its degradation rate and the regulatory or operational threshold not to be exceeded, the algorithm sorts Kilometre Points by order of urgency. The aim is not to produce a fixed intervention date, but to give teams a reliable hierarchy of zones to monitor as a priority, which becomes more accurate with each new measurement campaign.
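The ranking mechanism can be sketched as a time-to-threshold computation: for each section, how many campaigns remain before the parameter crosses the limit, given its current value and drift rate. Field names, the threshold and all values below are illustrative assumptions, not DiagFit's schema.

```python
def rank_by_criticality(sections, threshold):
    """Rank KP sections by estimated campaigns remaining before the monitored
    parameter crosses the threshold. Illustrative sketch only."""
    def campaigns_left(s):
        if s["drift"] <= 0:          # stable or improving section: lowest urgency
            return float("inf")
        return (threshold - s["value"]) / s["drift"]
    return sorted(sections, key=campaigns_left)

# Hypothetical sections: current longitudinal level (mm) and drift (mm/campaign)
sections = [
    {"kp": 12.350, "value": 2.3, "drift": 0.25},
    {"kp": 18.900, "value": 3.3, "drift": 0.10},
    {"kp": 25.125, "value": 1.0, "drift": -0.01},  # recently tamped, improving
]
ranked = rank_by_criticality(sections, threshold=3.5)
print([s["kp"] for s in ranked])  # [18.9, 12.35, 25.125]
```

KP 18.900 comes first because, despite a modest drift, it is already close to the threshold; the ranking re-sorts itself with every new measurement campaign.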
This projection is calculated for every Kilometre Point on the network, producing a complete predictive map: a view of the network where each section is colour-coded by intervention urgency, from green (no intervention needed in the short term) to red (priority intervention).
Planning tamping and rail grinding at the right place, at the right time
The operational impact of this approach is direct for infrastructure teams.
Today, planning tamping works (which restores track geometry by compacting the ballast under the sleepers) and rail grinding (which corrects the rail profile) largely relies on fixed criteria: intervention happens when a measurement campaign reveals a threshold breach, or follows a predetermined schedule. This approach is reactive and suboptimal: it does not anticipate the zones that will exceed the threshold before the next measurement campaign, and it does not prioritise interventions according to degradation rate.
With predictive projection, the logic is reversed. The infrastructure team gets early visibility on the zones that will need intervention in the coming weeks or months. They can group interventions by geographic zone to optimise the work windows (which are scarce and costly on a busy network). They can distinguish sections where degradation is slow and stable (intervention plannable in the medium term) from those where degradation is accelerating (intervention to prioritise).
For overhead line monitoring, the logic is identical. Contact wire wear is projected over time for each section, allowing replacements to be planned before wear reaches the critical threshold, without prematurely replacing sections still in good condition.
A 100% software-based, agnostic solution
A point often overlooked in discussions about predictive maintenance for infrastructure: the dependence on hardware. Many solutions on the market are tied to a specific acquisition system, a proprietary sensor type, a closed data format. For an operator managing a heterogeneous network, this dependence is a major obstacle.
The approach we have developed with DiagFit is deliberately 100% software-based and agnostic to the acquisition system. The solution interfaces with georeferenced CSV files from any measurement train or inspection trolley. Deployment runs via Docker, in cloud or on-premise, and a feasibility test can be carried out in 15 days on the operator’s historical data. Moving from a pilot (a section of a few kilometres) to line-wide deployment requires no change in architecture.
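As an illustration of what ingesting such a georeferenced CSV can look like, here is a minimal sketch using only the standard library. The column names (`campaign`, `kp_km`, `long_level_mm`) are assumptions for the example, not a required DiagFit schema.

```python
import csv
import io

# Hypothetical export from a measurement train (inlined here for self-containment)
raw = """campaign,kp_km,long_level_mm
2024-01,12.350,1.8
2024-01,12.375,2.1
2024-02,12.350,2.0
"""

# Parse each row into typed fields: campaign label, KP as float, reading as float
rows = [
    {"campaign": r["campaign"], "kp": float(r["kp_km"]), "value": float(r["long_level_mm"])}
    for r in csv.DictReader(io.StringIO(raw))
]
print(len(rows), rows[0]["kp"])  # 3 12.35
```

Because the input is plain georeferenced CSV, the same ingestion path works regardless of which measurement train or inspection trolley produced the file.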
This is a selection criterion all too often underestimated: the real scalability of a predictive solution, that is, its ability to move from proof of concept to production without rework, is what separates perpetual pilot projects from operational deployments.