Meta-feature Extraction Strategies for Active Anomaly Detection

Angiulli F.; Fassetti F.; Ferragina L.; Papaleo P.
2021-01-01

Abstract

Active Learning is a machine learning scenario in which methods are trained by iteratively submitting a query to a human expert and then taking their feedback into account in subsequent computations. The application of this paradigm to the anomaly detection task is called Active Anomaly Detection (AAD). Reinforcement Learning describes a family of algorithms that teach an agent a policy for dealing with external factors, based on the maximization of a reward function. Recently, AAD methods based on training a meta-policy with Deep Reinforcement Learning have proved very successful because, after training, they operate on a small number of meta-features and can be applied directly to any new dataset without further tuning. For these approaches a central question is the selection of good meta-features: in practice, the most common choice is to define them in terms of the distances to the points that the expert has already labelled as either anomalous or normal. In this work we explore different strategies for selecting effective meta-features. Specifically, we build meta-features from both direct and reverse nearest-neighbor rankings, since rankings are less sensitive to the specific distance distribution characterizing the training data, and we also experiment with combining them with related base detectors. The experiments show that there are scenarios in which our approach offers advantages over the standard technique.
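
To make the two constructions concrete, here is a minimal Python sketch, assuming Euclidean distance; the function names (distance_meta_features, rank_meta_features) and the exact feature definitions are hypothetical illustrations of the idea, not the authors' implementation.

```python
# Minimal sketch of distance-based vs. rank-based meta-features.
# Assumptions: Euclidean distance; hypothetical helper names; the paper's
# exact construction may differ.
import numpy as np

def distance_meta_features(x, labeled_pts):
    # Standard choice: one meta-feature per expert-labelled point,
    # namely the raw distance from the query point x to that point.
    return np.linalg.norm(labeled_pts - x, axis=1)

def rank_meta_features(x, labeled_pts, data):
    # Rank-based alternative: replace each raw distance to a labelled point p
    # with (i) the direct rank of p among x's neighbours in the data, and
    # (ii) the reverse rank of x among p's neighbours.
    d_from_x = np.linalg.norm(data - x, axis=1)        # distances from x to all points
    feats = []
    for p in labeled_pts:
        d_xp = np.linalg.norm(p - x)
        d_from_p = np.linalg.norm(data - p, axis=1)    # distances from p to all points
        direct = int(np.sum(d_from_x < d_xp))          # points closer to x than p is
        reverse = int(np.sum(d_from_p < d_xp))         # points closer to p than x is
        feats.extend([direct, reverse])
    return np.array(feats, dtype=float)

# Toy usage: 200 background points, two expert-labelled points, one query.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))
labeled = data[:2]
x = rng.normal(size=2)
print(distance_meta_features(x, labeled))
print(rank_meta_features(x, labeled, data))
```

Because the rank-based features count neighbours instead of measuring raw distances, they depend only on the ordering induced by the distance function, which is the insensitivity to the distance distribution that the abstract appeals to.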
Year: 2021
ISBN: 978-3-030-91607-7; 978-3-030-91608-4
Keywords: Anomaly Detection, Active Learning, Meta-feature Extraction, Reinforcement Learning
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/360982