Adaptive Multimodal Fusion Framework for Activity Monitoring of People with Mobility Disability

Fortino, Giancarlo; Gravina, Raffaele
2022-01-01

Abstract

The development of activity recognition based on multi-modal data makes it possible to reduce human intervention in the monitoring process. This paper proposes an efficient and cost-effective multi-modal sensing framework for activity monitoring that can automatically identify human activities from multi-modal data and provide help to patients with moderate disabilities. The framework relies on parallel processing of video and inertial data. A new supervised adaptive multi-modal fusion method (AMFM) is used to process multi-modal human activity data. A spatio-temporal graph convolutional network with an adaptive loss function (ALSTGCN) is proposed to extract skeleton sequence features, and a long short-term memory fully convolutional network (LSTM-FCN) module with an adaptive loss function is adapted to extract inertial data features. An adaptive learning method is proposed at the decision level to learn the contribution of the two modalities to the classification results. The effectiveness of the algorithm is demonstrated on two public multi-modal datasets (UTD-MHAD and C-MHAD) and a new multi-modal dataset, H-MHAD, collected in our laboratory. The results show that the AMFM approach outperforms the video-based and inertial-based single-modality models on all three datasets. The class-balanced cross-entropy loss function further improves model performance on the H-MHAD dataset: the action recognition accuracy is 91.18%, and the recall of the falling activity is 100%. These results illustrate that using multiple heterogeneous sensors to realize automatic process monitoring is a feasible alternative to manual response.
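To make the decision-level fusion and the class-balanced loss described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the branch networks stand in for the ALSTGCN and LSTM-FCN models, and the tensor shapes, class counts, and the beta hyper-parameter are assumptions. It shows fused class scores computed as a softmax-normalized, learnable weighting of the two modality outputs, trained with a cross-entropy loss whose per-class weights follow the class-balanced scheme (1 - beta) / (1 - beta^n_c).

```python
# Illustrative sketch of adaptive decision-level fusion with a
# class-balanced cross-entropy loss. All concrete names and shapes
# below are assumptions for the example, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights (1 - beta) / (1 - beta^n_c), normalized to sum to C."""
    n = torch.as_tensor(samples_per_class, dtype=torch.float32)
    effective_num = 1.0 - torch.pow(beta, n)
    w = (1.0 - beta) / effective_num
    return w / w.sum() * len(n)


class AdaptiveDecisionFusion(nn.Module):
    """Fuses class scores from a skeleton branch (e.g. an ST-GCN) and an
    inertial branch (e.g. an LSTM-FCN) with learnable modality weights."""

    def __init__(self, skeleton_branch, inertial_branch):
        super().__init__()
        self.skeleton_branch = skeleton_branch   # skeleton input -> (B, num_classes)
        self.inertial_branch = inertial_branch   # inertial input -> (B, num_classes)
        # One scalar per modality; softmax keeps the contributions positive
        # and summing to one, learned jointly with the branch networks.
        self.fusion_logits = nn.Parameter(torch.zeros(2))

    def forward(self, skeleton_x, inertial_x):
        w = torch.softmax(self.fusion_logits, dim=0)
        s = self.skeleton_branch(skeleton_x)
        i = self.inertial_branch(inertial_x)
        return w[0] * s + w[1] * i  # fused class scores, shape (B, num_classes)


if __name__ == "__main__":
    # Toy linear branches stand in for the real skeleton and inertial models.
    num_classes = 11
    skeleton_branch = nn.Sequential(nn.Flatten(), nn.Linear(25 * 3 * 30, num_classes))
    inertial_branch = nn.Sequential(nn.Flatten(), nn.Linear(6 * 30, num_classes))
    model = AdaptiveDecisionFusion(skeleton_branch, inertial_branch)

    skeleton_x = torch.randn(8, 25, 3, 30)   # (batch, joints, coords, frames) -- assumed layout
    inertial_x = torch.randn(8, 6, 30)       # (batch, channels, samples) -- assumed layout
    labels = torch.randint(0, num_classes, (8,))

    weights = class_balanced_weights([200] * 10 + [20])  # imbalanced toy class counts
    loss = F.cross_entropy(model(skeleton_x, inertial_x), labels, weight=weights)
    loss.backward()
```

In this sketch the fusion weights are global scalars; learning them jointly with the branches lets the more reliable modality dominate the fused decision, which is the intuition behind the adaptive decision-level fusion summarized in the abstract.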

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11770/332208
