Industrial AI and the energy challenge
Artificial Intelligence (AI) is growing rapidly in industry, but this expansion comes with a significant increase in energy consumption. Data centers currently account for between 1% and 2% of global electricity consumption, a figure that could rise to 3% to 4% by 2030 (source: Deloitte). This increase is largely driven by the computationally intensive requirements of machine learning models and large language models (LLMs).
In the industrial sector, this trend is all the more concerning as companies adopt AI more and more widely. Complex models, often perceived as necessary, require massive computing resources, amplifying the environmental impact.
As planetary limits draw closer, it is becoming unreasonable to continue the race toward ever-larger models. These resources come at a cost that not only strains corporate budgets but also limits the adoption of AI. Faced with these challenges, industrial players need to rethink their approaches to reconcile performance with energy sobriety.
Frugal AI: an essential approach
AI frugality aims to optimize resources while minimizing energy footprint. This approach is based on several principles:
- Reducing power consumption, using more efficient systems and optimized algorithms to cut computing times.
- Reusing existing models, relying on already-trained models rather than creating new ones.
- Targeting data processing more precisely, avoiding the collection of unnecessary, energy-consuming information.
A key element of this frugality lies in the distinction between model training and model inference. Training requires billions of calculations and can consume up to 1,000 MWh, equivalent to the annual consumption of a small village. Inference, on the other hand – using a model that has already been trained – requires just a few kWh. The training phase therefore consumes several orders of magnitude more energy than a single inference run.
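To put those figures in perspective, here is a quick back-of-the-envelope calculation using the round numbers above. The per-inference value of 5 kWh is an assumption standing in for "a few kWh"; real figures vary widely by model and hardware:

```python
# Back-of-the-envelope comparison of the illustrative energy figures above.
# These are the article's round values, not measurements of any specific model.
TRAINING_ENERGY_KWH = 1_000 * 1_000   # 1,000 MWh for one training run, in kWh
INFERENCE_ENERGY_KWH = 5              # "a few kWh" per inference run (assumed)

ratio = TRAINING_ENERGY_KWH / INFERENCE_ENERGY_KWH
print(f"Training uses about {ratio:,.0f}x the energy of one inference run")
```

Even with a generous estimate per inference, the gap spans several orders of magnitude, which is why reusing already-trained models is a core frugality principle.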
DiagFit: a software to meet energy challenges
To meet these challenges, DiagFit, the software developed by Amiral Technologies, offers an efficient, energy-saving approach. The software makes it quick and easy to test different models, an important asset in industrial environments. To build understanding, it can explore and analyze data from sensors and industrial equipment. Thanks to its no-code interface for business experts, DiagFit enables seamless navigation through data and facilitates the extraction of relevant information. Its technology, entirely dedicated to industrial time series, is the result of several years of R&D, providing accessible, intuitive advanced analysis features as well as frugal, fast-executing anomaly detection models.
Reduced computing costs
DiagFit speeds up model execution, thus limiting resource use, reducing energy consumption and associated costs. This efficiency rests on two pillars: firstly, algorithmic optimization, which guarantees fast and accurate data processing, and secondly, the software’s intrinsic performance. DiagFit’s response times have been extensively optimized to enhance the user experience, while enabling efficient exploration of databases containing tens of millions of rows. This approach guarantees reliable large-scale analysis while minimizing the energy footprint and costs associated with industrial data processing.
Optimized data management
In industry, data is often limited for reasons of confidentiality, missing failure histories, or long lead times to obtain (or generate) it. DiagFit turns this constraint into an advantage by working on smaller but well-targeted datasets, making model training faster and less energy-intensive.
Based on simple models
Unlike resource-hungry deep learning models, DiagFit relies on models built from problem-tailored features and signal decomposition to process data more efficiently. This lighter approach reduces energy consumption while maintaining optimal performance.
Illustration of DiagFit results
Measuring carbon impact: concrete results
DiagFit’s energy consumption was evaluated using the CodeCarbon package, comparing several models according to criteria such as performance (MCC) and energy efficiency:
- Matthews Correlation Coefficient (MCC), a metric that evaluates the performance of binary classification models.
- Computation time and equivalent CO₂ emissions (eqCO₂).
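As a reminder of how the MCC metric works, here is a minimal pure-Python sketch that computes it from the four cells of a binary confusion matrix. The variable names and sample counts are illustrative, not values from the study:

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from a binary confusion matrix.

    Returns a value in [-1, 1]: 1 = perfect prediction,
    0 = no better than chance, -1 = total disagreement.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # degenerate case: a class is never predicted or never observed
    return (tp * tn - fp * fn) / denom

# Illustrative counts for one anomaly-detection run
print(round(mcc(tp=90, tn=85, fp=15, fn=10), 3))
```

Unlike accuracy, MCC stays informative on the imbalanced datasets typical of failure detection, where anomalies are rare, which is why the study uses it as its main performance criterion.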
Main results:
- DiagFit delivers the best MCC and AUC performance with reduced computing time, thereby lowering its carbon footprint.
- Deep learning models, although performant, show higher computation times.
- The usual machine learning models (IF, OCSVM, LOF) perform less well.
- Deep learning, while training-intensive, may be more CO₂-efficient in the inference phase.
Summary table of results based on MCC. In yellow, the best value for each column.
Computational performance: a key factor in industry
As an example, consider outlier detection, a preprocessing step performed upstream of the model to clean the data. To evaluate the performance of our DiagFit solution, we compared it with other reference models on a random signal of increasing size. The results show that our detector offers significantly higher processing speed, which can be an advantage for certain industrial applications.
- The Amiral detector is 150 times faster than ARIMA for 1,000,000 samples.
- It is 700 times faster than the Stray model.
Prophet could not be run beyond 250,000 samples, making it unsuitable for industrial applications at this scale.
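To give a concrete feel for this kind of preprocessing, here is a minimal rolling z-score outlier detector in pure Python. This is a generic textbook technique, not DiagFit's (or any benchmarked model's) actual algorithm, and the window size and threshold are illustrative:

```python
from collections import deque
import math

def rolling_zscore_outliers(signal, window=50, threshold=3.0):
    """Flag indices whose value deviates by more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    buf = deque(maxlen=window)
    outliers = []
    for i, x in enumerate(signal):
        if len(buf) == window:
            mean = sum(buf) / window
            std = math.sqrt(sum((v - mean) ** 2 for v in buf) / window)
            if std > 0 and abs(x - mean) / std > threshold:
                outliers.append(i)
        buf.append(x)
    return outliers

# Smooth sine signal with one injected spike at index 120
sig = [math.sin(0.1 * i) for i in range(200)]
sig[120] = 10.0
print(rolling_zscore_outliers(sig))
```

Each sample is compared only against the samples before it, so the detector runs in a single pass over the signal; this streaming structure is what makes such preprocessing cheap compared with fitting a full forecasting model like ARIMA or Prophet.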
Would you like to find out more about the study?
Conclusion: DiagFit, a frugal AI software for industry
Thanks to optimized calculations, efficient management of confidential data and the use of simple models, DiagFit combines performance and energy frugality. In a world where AI is becoming ubiquitous, adopting a frugal approach is essential to ensure a more sustainable and responsible future.