Multi-Objective Models for Sparse Optimization in Linear Support Vector Machine Classification
Behzad Pirouz
Behrouz Pirouz
2023-01-01
Abstract
The design of linear Support Vector Machine (SVM) classification techniques is, in general, a Multi-objective Optimization Problem (MOP): it requires finding an appropriate trade-off between two objectives, namely the amount of misclassified training data (the classification error) and the number of non-zero elements of the separating hyperplane. In this article, we review several linear SVM classification models formulated as multi-objective optimization problems. We place particular emphasis on applying sparse optimization (minimization of the number of non-zero elements of the separating hyperplane) to Feature Selection (FS) in multi-objective linear SVMs. Our primary purpose is to demonstrate the advantages of treating linear SVM classification as a MOP: in the multi-objective case we obtain a set of Pareto optimal solutions rather than the single optimal solution of the single-objective case. Results for these linear SVMs are reported on several classification datasets. The test problems are specifically designed to stress the number of non-zero components of the normal vector of the separating hyperplane, and we use them to compare the multi-objective and single-objective models.
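The following is a minimal, hypothetical sketch (not the authors' code) of the bi-objective trade-off described in the abstract: classification error versus the number of non-zero components of the hyperplane's normal vector. It assumes the zero-norm sparsity objective is approximated by an L1 surrogate and that the Pareto front is traced by scalarization, sweeping the regularization parameter C of scikit-learn's LinearSVC over synthetic data; the dataset sizes and C grid are illustrative assumptions.

```python
# Hypothetical illustration, not the paper's method: approximate the Pareto front of a
# bi-objective linear SVM -- (training error, number of non-zero weights) -- by sweeping
# the trade-off parameter C of an L1-regularized linear SVM (L1 used as a surrogate for
# the zero-norm sparsity objective).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Assumed synthetic data: only 5 of 50 features are informative, so a sparse separating
# hyperplane exists.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           n_redundant=0, random_state=0)

candidates = []
for C in np.logspace(-3, 2, 30):
    clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=C, max_iter=20000)
    clf.fit(X, y)
    error = 1.0 - clf.score(X, y)            # objective 1: misclassification rate
    nnz = int(np.count_nonzero(clf.coef_))   # objective 2: non-zeros in the normal vector
    candidates.append((error, nnz, C))

def dominates(q, p):
    # q dominates p if it is no worse in both objectives and strictly better in one.
    return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])

# Keep only the non-dominated (Pareto optimal) trade-offs.
pareto = [p for p in candidates if not any(dominates(q, p) for q in candidates)]

for error, nnz, C in sorted(pareto, key=lambda p: p[1]):
    print(f"C={C:8.4f}  non-zeros={nnz:3d}  training error={error:.3f}")
```

The printed list is a discrete approximation of the Pareto set: instead of one "best" classifier, the user can pick a point that balances sparsity (fewer selected features) against training error.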