A Novel Smartphone-Based Human Activity Recognition Approach using Convolutional Autoencoder Long Short-Term Memory Network

Thakur D. (Writing – Original Draft Preparation); Roy S.
2023-01-01

Abstract

In smart and intelligent health care, smartphone sensor-based automatic recognition of human activities has evolved into an emerging field of research. In many application domains, deep learning (DL) strategies are more effective than conventional machine learning (ML) models, and human activity recognition (HAR) is no exception. In this paper, we propose a novel framework, CAEL-HAR, that combines CNN, autoencoder, and LSTM architectures for efficient smartphone-based HAR. There is a natural synergy between the modeling abilities of these networks: autoencoders (AEs) are well suited to dimensionality reduction, CNNs excel at automated feature extraction, and LSTMs excel at modeling time series. Taking advantage of this complementarity, the proposed methodology combines CNNs, AEs, and LSTMs into a single architecture. We evaluated the proposed architecture on the UCI and WISDM public benchmark datasets. The simulation and experimental results certify the merits of the proposed method and indicate that it outperforms current state-of-the-art methods in terms of computing time, F1-score, precision, accuracy, and recall.
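The record does not include the exact CAEL-HAR layer configuration, but the pipeline described in the abstract (CNN feature extraction, an autoencoder-style bottleneck for dimensionality reduction, then an LSTM for temporal modeling) can be sketched as follows. This is a minimal illustrative sketch in Keras: the window length, channel count, layer sizes, and number of classes are assumptions for illustration, not the authors' reported configuration.

```python
# Hypothetical sketch of a CNN + autoencoder-bottleneck + LSTM model for
# smartphone HAR. All sizes below are assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 128      # samples per sliding window (assumed)
CHANNELS = 6      # e.g. 3-axis accelerometer + 3-axis gyroscope (assumed)
NUM_CLASSES = 6   # e.g. walking, sitting, standing, ... (assumed)

inputs = layers.Input(shape=(WINDOW, CHANNELS))

# CNN stage: automatic local feature extraction from raw sensor windows
x = layers.Conv1D(64, kernel_size=5, activation="relu", padding="same")(inputs)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(128, kernel_size=5, activation="relu", padding="same")(x)
x = layers.MaxPooling1D(2)(x)

# Autoencoder-style bottleneck: compress the feature maps to a lower dimension
encoded = layers.Conv1D(32, kernel_size=3, activation="relu", padding="same")(x)

# LSTM stage: model the temporal dynamics of the compressed feature sequence
x = layers.LSTM(64)(encoded)

outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```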
2023

Keywords: autoencoder; CNN; deep learning; human activity recognition (HAR); LSTM; smartphone sensors

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/385841
Note: the displayed metadata have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 19
  • Web of Science (ISI): 16