DPET: A Data and Parameter Efficient Training Framework for Green AI
Scala, F.; Pontieri, L.; Flesca, S.
2025-01-01
Abstract
The worsening climate crisis calls for immediate action to reduce the environmental impact of energy-intensive technologies, including Artificial Intelligence (AI). Reducing AI’s environmental footprint involves adopting energy-efficient strategies for training Deep Neural Networks (DNNs). One such strategy is Data Pruning (DP), which decreases the number of training instances, thereby lowering total energy consumption. Several DP methods, such as GraNd and Craig, have been introduced to accelerate model training. On the other hand, Active Learning (AL) techniques, originally designed to iteratively select relevant unlabeled data instances to be labeled by human experts, can also be leveraged to train models on smaller, but informative, subsets. However, despite reducing the volume of training data, many DP- and AL-based methods involve expensive computations that may significantly limit their potential for energy savings. In this work-in-progress, we propose a framework, named DPET, that efficiently integrates data selection techniques within an AL-like incremental training process. Empirical analyses on a benchmark dataset show that the proposed approach offers a better balance between accuracy and energy efficiency in the training of DNN models.
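To make the two ingredients concrete, the sketch below combines a GraNd-style score (the per-example gradient norm of the loss) with an AL-like loop that incrementally grows the training subset. This is an illustrative toy, not the authors' DPET implementation: the linear model, the logistic loss, the subset sizes, and the number of rounds are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data generated from a linear separator.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)


def grand_scores(w, X, y):
    """GraNd-style score: per-example gradient norm of the loss.

    For logistic regression the gradient w.r.t. w on example i is
    (sigmoid(w . x_i) - y_i) * x_i, so its norm is |p_i - y_i| * ||x_i||.
    """
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return np.abs(p - y) * np.linalg.norm(X, axis=1)


# AL-like incremental training: each round, add the highest-scoring
# (most informative) examples to the training subset, then update the
# model on the selected subset only.
w = np.zeros(5)
selected = np.array([], dtype=int)
for _round in range(5):
    scores = grand_scores(w, X, y)
    scores[selected] = -np.inf            # never re-select an example
    new_idx = np.argsort(scores)[-20:]    # top-20 by gradient norm
    selected = np.concatenate([selected, new_idx])
    for _ in range(50):                   # a few gradient steps
        p = 1.0 / (1.0 + np.exp(-(X[selected] @ w)))
        grad = X[selected].T @ (p - y[selected]) / len(selected)
        w -= 0.5 * grad

# The model only ever trained on half of the data (100 of 200 examples).
accuracy = np.mean(((X @ w) > 0) == y.astype(bool))
```

Note the energy trade-off the abstract alludes to: the GraNd-style scoring pass itself costs a forward/backward sweep over the candidate pool, so a selection scheme only saves energy overall if that cost stays well below the training it avoids.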


