Learning representational invariance instead of categorization

Standard

Learning representational invariance instead of categorization. / Hernandez-Garcia, Alex; König, Peter.

in: IEEE Xplore, 2019.

Publications: SCORING: Contribution to journal/newspaper · SCORING: Journal article · Research · Peer-reviewed

Harvard

Hernandez-Garcia, A & König, P 2019, 'Learning representational invariance instead of categorization', IEEE Xplore.

APA

Hernandez-Garcia, A., & König, P. (2019). Learning representational invariance instead of categorization. IEEE Xplore.

Vancouver

Hernandez-Garcia A, König P. Learning representational invariance instead of categorization. IEEE Xplore. 2019.

Bibtex

@article{b64e656f05884547b500e4434c4495a0,
title = "Learning representational invariance instead of categorization",
abstract = "The current most accurate models of image object categorization are deep neural networks trained on large labeled data sets. Minimizing a classification loss between the predictions of the network and the true labels has proven an effective way to learn discriminative functions of the object classes. However, recent studies have suggested that such models learn highly discriminative features that are not aligned with visual perception and might be at the root of adversarial vulnerability. Here, we propose to replace the classification loss with the joint optimization of invariance to identity-preserving transformations of images (data augmentation invariance), and the invariance to objects of the same category (class-wise invariance). We hypothesize that optimizing these invariance objectives might yield features more aligned with visual perception, more robust to adversarial perturbations, while still suitable for accurate object categorization.",
author = "Alex Hernandez-Garcia and Peter K{\"o}nig",
year = "2019",
language = "English",
}

RIS

TY - JOUR

T1 - Learning representational invariance instead of categorization

AU - Hernandez-Garcia, Alex

AU - König, Peter

PY - 2019

Y1 - 2019

N2 - The current most accurate models of image object categorization are deep neural networks trained on large labeled data sets. Minimizing a classification loss between the predictions of the network and the true labels has proven an effective way to learn discriminative functions of the object classes. However, recent studies have suggested that such models learn highly discriminative features that are not aligned with visual perception and might be at the root of adversarial vulnerability. Here, we propose to replace the classification loss with the joint optimization of invariance to identity-preserving transformations of images (data augmentation invariance), and the invariance to objects of the same category (class-wise invariance). We hypothesize that optimizing these invariance objectives might yield features more aligned with visual perception, more robust to adversarial perturbations, while still suitable for accurate object categorization.

AB - The current most accurate models of image object categorization are deep neural networks trained on large labeled data sets. Minimizing a classification loss between the predictions of the network and the true labels has proven an effective way to learn discriminative functions of the object classes. However, recent studies have suggested that such models learn highly discriminative features that are not aligned with visual perception and might be at the root of adversarial vulnerability. Here, we propose to replace the classification loss with the joint optimization of invariance to identity-preserving transformations of images (data augmentation invariance), and the invariance to objects of the same category (class-wise invariance). We hypothesize that optimizing these invariance objectives might yield features more aligned with visual perception, more robust to adversarial perturbations, while still suitable for accurate object categorization.

M3 - SCORING: Zeitschriftenaufsatz

ER -