Predictive maintenance has become a strategic priority for Defence stakeholders. Behind the term lies a strong promise: improving operational availability, reducing unexpected failures and securing critical systems.
Radars, sonars, propulsion systems, embedded electronics, sensitive sensors: these assets operate in extreme environments and must deliver reliable performance over life cycles that often exceed twenty or thirty years.
In some programmes, proofs of concept multiply. Industrial deployments, however, remain rare.
In a previous article on improving the reliability of critical Defence equipment, we explored how artificial intelligence can strengthen system robustness throughout the entire life cycle. Yet despite this potential, one reality persists: most predictive maintenance initiatives in Defence do not progress beyond the proof of concept stage. Let us examine why.
An Attractive Promise, Yet Operationally Difficult to Deliver
On paper, the benefits of predictive maintenance are clear. It enables early detection of drift, reduces downtime, optimises maintenance plans and enhances system reliability across the full life cycle.
Within a POC, these promises often appear achievable. The scope is controlled, datasets are selected and the technical environment is stabilised. Performance results are frequently convincing.
However, scaling up often exposes a significant gap between technical demonstration and operational reality.
A Structurally Different Defence Environment
The Defence sector combines constraints rarely encountered elsewhere.
First, failure data is scarce. Equipment is designed for robustness and critical failures remain infrequent. Approaches based on large volumes of annotated failure history quickly reach their limits.
Second, technical architectures are constrained: segmented systems, embedded environments, and stringent sovereignty and cybersecurity requirements. A solution designed for an open cloud environment may prove extremely difficult to deploy within isolated infrastructures.
Finally, tolerance for error is exceptionally low. Excessive false positives erode trust, while a misinterpreted alert can lead to costly or high-risk decisions.
These characteristics are not theoretical. They emerge immediately when moving from a controlled POC environment to real embedded systems. In such contexts, algorithmic performance alone is insufficient. Operational robustness becomes the priority.
Recurring Causes of POC-Level Blockages
1. Excessive Dependence on Historical Failure Data
Many projects rely on supervised approaches requiring large volumes of labelled fault data. In Defence, critical failures are rare and sometimes classified. Strictly supervised models, dependent on extensive annotated datasets, are therefore rarely suited to Defence environments.
The model performs well in the laboratory but lacks robustness once operational conditions evolve.
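One pragmatic alternative to label-hungry supervised learning is to train only on nominal behaviour and flag deviations from it. As an illustrative sketch (the sensor channels, values and thresholds here are hypothetical, not taken from any real programme), an isolation forest fitted on healthy readings alone can flag drift without a single labelled failure:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Nominal readings only, no labelled failures needed:
# hypothetical vibration RMS and bearing temperature under healthy operation.
healthy = rng.normal(loc=[0.5, 60.0], scale=[0.05, 2.0], size=(500, 2))

# Fit on healthy behaviour alone; deviations from it score as anomalies.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# A drifting reading (higher vibration, hotter bearing).
drifting = np.array([[0.9, 75.0]])
print(model.predict(drifting))  # -1 means anomaly, 1 means nominal
```

The design choice matters less than the principle: the model learns what "normal" looks like, so scarce or classified failure history stops being a blocker.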
2. AI Designed for Data Scientists, Not for Maintenance Experts
In critical environments, an alert without clear explanation is difficult to act upon. Experts need to understand which signals contributed to the detection, how the drift is evolving and which elements should inform their diagnosis.
Without explainability and actionable guidance, the tool is perceived as a black box. It may impress during the pilot phase but struggles to gain acceptance in day-to-day operations.
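Explainability need not mean heavyweight tooling. A minimal sketch, assuming a hypothetical baseline of three monitored channels, is to report each signal's deviation from its nominal range alongside the alert, so the expert sees which measurements drove the detection:

```python
import numpy as np

def explain_alert(reading, nominal_mean, nominal_std, feature_names):
    """Rank the signals driving an alert, in standard deviations from nominal."""
    z = np.abs((reading - nominal_mean) / nominal_std)
    order = np.argsort(z)[::-1]  # largest deviation first
    return [(feature_names[i], round(float(z[i]), 1)) for i in order]

# Hypothetical nominal baseline for three monitored channels.
names = ["vibration_rms", "bearing_temp", "oil_pressure"]
mean = np.array([0.5, 60.0, 4.0])
std = np.array([0.05, 2.0, 0.2])

reading = np.array([0.9, 61.0, 3.9])
print(explain_alert(reading, mean, std, names))
```

Here the maintainer immediately sees that vibration, not temperature or pressure, is the signal to investigate, which turns an opaque alert into a starting point for diagnosis.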
3. Lack of Integration into MCO Processes
Predictive maintenance cannot operate in parallel with existing processes. It must be integrated into maintenance management systems, operational workflows and the contractual constraints of equipment manufacturers.
Without this integration, the project remains isolated and therefore fragile.
What Traditional Approaches Underestimate
The Defence sector demands a long term perspective. Systems evolve, are modernised, reconfigured and sometimes operated under conditions very different from those originally anticipated.
A static model, calibrated on a stable dataset, will degrade over time. The real challenge is not only initial performance but sustained adaptability.
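This degradation can at least be detected. As a minimal sketch (the distributions are simulated stand-ins for real sensor streams), a two-sample Kolmogorov-Smirnov test comparing a recent data window against the calibration baseline signals when the inputs have drifted and recalibration is due:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Baseline: sensor distribution at model calibration time (simulated).
baseline = rng.normal(0.5, 0.05, size=1000)

# Recent window: the asset has been modernised, shifting its signature.
recent = rng.normal(0.6, 0.05, size=200)

# A small p-value means the input distribution has drifted away from
# the one the model was calibrated on.
stat, p_value = ks_2samp(baseline, recent)
if p_value < 0.01:
    print("drift detected: schedule model recalibration")
```

Monitoring of this kind is what separates sustained adaptability from a one-off calibration that silently decays.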
Moreover, predictive maintenance is not limited to anomaly detection. The real objective is diagnosis. Detecting a deviation only creates value if it effectively guides maintenance action in a relevant and traceable manner.
It is precisely this transition from simple alerting to diagnostic support that determines whether a POC becomes an industrial capability.
From POC to Operational Capability
Moving beyond the POC stage requires a paradigm shift.
It is no longer about proving that an algorithm can detect an anomaly on a selected dataset. It is about designing a solution compatible with real world constraints: limited data, constrained architectures, explainability requirements and long asset life cycles.
What genuinely works is based on three structural conditions:
- Models capable of learning without massive volumes of historical failure data
- Deployment compatible with on-premises environments and segmented architectures
- Maintenance oriented explainability, designed to guide field experts in decision making
Only under these conditions can predictive maintenance become a sustainable industrial capability rather than yet another experimental initiative.
Conclusion
If most predictive maintenance projects in Defence remain stuck at the POC stage, it is not because AI does not work.
The issue lies in applying methods designed for open industrial environments to constrained, closed and evolving Defence systems.
The real question is therefore not “Can we deliver a successful POC?” but rather “Is the solution designed to overcome the real constraints of the Defence sector?”