DC Optimization in Adversarial Sparse Support Vector Machine
Astorino A.; Gorgone E.
2025-01-01
Abstract
In supervised classification models, such as the Support Vector Machine (SVM), the main purpose is to predict the class membership of incoming samples. In some real-world applications, malicious inputs are inserted to mislead a vulnerable classifier, leading to wrong predictions. In our work we focus first on the problem of finding the smallest perturbation of a sample that induces incorrect classification, and then on how to produce a significant degradation of the classifier by acting on a subset of the input samples. The novelty of the proposed approach lies in computing sparse perturbations by minimizing the ℓ0-pseudo-norm of the perturbation, which gives rise to a Difference of Convex (DC) optimization model. We present the results of some preliminary experiments.
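To illustrate the two notions the abstract contrasts, the sketch below computes, for a hypothetical linear classifier w·x + b (the weights, bias, and sample are made-up values, not from the paper), both the dense minimal-norm (ℓ2) perturbation that flips the predicted class and a sparse alternative that changes a single feature, in the spirit of ℓ0 minimization. The paper's actual DC optimization model for the ℓ0-pseudo-norm is not reproduced here; this is only a closed-form illustration for the linear case.

```python
import numpy as np

# Hypothetical linear classifier w.x + b and sample x (illustrative values only).
w = np.array([0.5, -2.0, 0.25])
b = 0.1
x = np.array([1.0, 0.2, 3.0])

margin = w @ x + b              # signed score; its sign is the predicted class
eps = 1e-6                      # push slightly past the decision boundary

# Dense minimal l2-norm perturbation: move x orthogonally across the hyperplane.
delta_l2 = -(margin + np.sign(margin) * eps) / (w @ w) * w

# Sparse (l0-style) alternative: for a linear model, perturbing only the
# feature with the largest |w_i| suffices to flip the prediction.
i = int(np.argmax(np.abs(w)))
delta_l0 = np.zeros_like(x)
delta_l0[i] = -(margin + np.sign(margin) * eps) / w[i]

print(np.sign(margin), np.sign(w @ (x + delta_l2) + b))  # class flips
print(np.count_nonzero(delta_l0))                        # one nonzero entry
```

For nonlinear kernels or genuinely ℓ0-constrained attacks no such closed form exists, which is what motivates the DC reformulation studied in the paper.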


