Title Simplicial-Map Neural Networks Robust to Adversarial Examples
Authors Paluzo-Hidalgo, Eduardo; Gonzalez-Diaz, Rocio; Gutierrez-Naranjo, Miguel A.; Heras, Jonathan
External publication Yes
Means Mathematics
Scope Article
Nature Scientific
JCR Quartile 1
SJR Quartile 2
JCR Impact 2.592
SJR Impact 0.538
Publication date 01/01/2021
ISI 000611359600001
DOI 10.3390/math9020169
Abstract Broadly speaking, an adversarial example against a classification model occurs when a small perturbation of an input data point produces a change in the output label assigned by the model. Such adversarial examples represent a weakness for the safety of neural network applications, and many different solutions have been proposed to minimize their effects. In this paper, we propose a new approach by means of a family of neural networks called simplicial-map neural networks, constructed from an algebraic topology perspective. Our proposal is based on three main ideas. First, given a classification problem, both the input dataset and its set of one-hot labels are endowed with simplicial complex structures, and a simplicial map between such complexes is defined. Second, a neural network characterizing the classification problem is built from such a simplicial map. Finally, by considering barycentric subdivisions of the simplicial complexes, a decision boundary is computed that makes the neural network robust to adversarial attacks of a given size.
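The first two ideas in the abstract (triangulating the input data and classifying via a simplicial map into the one-hot label space) can be sketched as follows. This is a minimal illustration, not the paper's construction: it assumes a Delaunay triangulation of the training points (via `scipy.spatial.Delaunay`) as the input simplicial complex, and omits both the covering simplex for points outside the convex hull and the barycentric subdivisions used in the paper to obtain robustness guarantees. The function name `simplicial_map_classify` and the toy data are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy 2-D dataset: five points with one-hot labels for two classes.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
labels = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [1, 0]], dtype=float)

# Simplicial complex structure on the input dataset.
tri = Delaunay(points)

def simplicial_map_classify(p):
    """Send p to label space by the simplicial map: express p in barycentric
    coordinates of its containing simplex, then take the same convex
    combination of the one-hot labels of that simplex's vertices."""
    p = np.asarray(p, dtype=float)
    s = int(tri.find_simplex(np.atleast_2d(p))[0])
    if s == -1:
        return None  # outside the complex (the paper handles this case)
    d = points.shape[1]
    T = tri.transform[s]                       # affine map to barycentric coords
    b = T[:d].dot(p - T[d])
    bary = np.append(b, 1.0 - b.sum())         # all d+1 barycentric coordinates
    return bary.dot(labels[tri.simplices[s]])  # convex combination of labels

# A point near the class-0 vertices gets a label vector dominated by class 0.
print(simplicial_map_classify([0.2, 0.1]))
```

Because the output is a convex combination of one-hot vectors, it is a probability vector; the predicted class is its argmax. The paper's contribution is then to realize this piecewise-linear map exactly as a two-hidden-layer neural network and to refine the complex by barycentric subdivision until the decision boundary is provably stable under perturbations of a given size.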
Keywords algebraic topology; neural network; adversarial examples
Universidad Loyola members