Learning representational invariance instead of categorization
Standard
Learning representational invariance instead of categorization. / Hernandez-Garcia, Alex; König, Peter.
In: IEEE Xplore, 2019. Research output: SCORING: Contribution to journal › SCORING: Journal article › Research › peer-review
RIS
TY - JOUR
T1 - Learning representational invariance instead of categorization
AU - Hernandez-Garcia, Alex
AU - König, Peter
PY - 2019
Y1 - 2019
AB - The current most accurate models of image object categorization are deep neural networks trained on large labeled data sets. Minimizing a classification loss between the predictions of the network and the true labels has proven an effective way to learn discriminative functions of the object classes. However, recent studies have suggested that such models learn highly discriminative features that are not aligned with visual perception and might be at the root of adversarial vulnerability. Here, we propose to replace the classification loss with the joint optimization of invariance to identity-preserving transformations of images (data augmentation invariance), and the invariance to objects of the same category (class-wise invariance). We hypothesize that optimizing these invariance objectives might yield features more aligned with visual perception, more robust to adversarial perturbations, while still suitable for accurate object categorization.
M3 - SCORING: Journal article
ER -