String representations and distances in deep Convolutional Neural Networks for image classification - Université Jean-Monnet-Saint-Étienne
Journal article in Pattern Recognition, 2016

String representations and distances in deep Convolutional Neural Networks for image classification

Cécile Barat
  • Role: Author
  • PersonId: 844844

Abstract

Recent advances in image classification mostly rely on the use of powerful local features combined with an adapted image representation. Although Convolutional Neural Network (CNN) features learned from ImageNet were shown to be generic and very efficient, they still lack the flexibility to take into account variations in the spatial layout of visual elements. In this paper, we investigate the use of structural representations on top of pre-trained CNN features to improve image classification. Images are represented as strings of CNN features. Similarities between such representations are computed using two new edit distance variants adapted to the image classification domain. Our algorithms have been implemented and tested on several challenging datasets: 15 Scenes, Caltech101, Pascal VOC 2007 and MIT Indoor. The results show that our idea of using structural string representations and distances clearly improves the classification performance over standard approaches based on CNN features and an SVM with linear kernel, as well as other recognized methods from the literature.
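The paper's two edit distance variants are not reproduced here, but the general idea of comparing two images as strings of CNN feature vectors can be illustrated with a minimal sketch. It assumes images are described as ordered sequences of L2-normalised CNN features, that substitutions are penalised by cosine distance, and that insertions and deletions pay a fixed `gap_cost`; all of these cost choices are illustrative assumptions, not the authors' actual variants.

```python
import numpy as np

def feature_edit_distance(a, b, gap_cost=1.0):
    """Edit distance between two sequences of feature vectors.

    a, b: arrays of shape (len_a, d) and (len_b, d), e.g. L2-normalised
    CNN activations extracted from image regions ordered along a spatial
    scan. Substitution cost is the cosine distance between vectors;
    insertions and deletions cost `gap_cost` (illustrative choices).
    """
    n, m = len(a), len(b)
    # Pairwise cosine distances between all features (vectors assumed normalised).
    sub = 1.0 - a @ b.T
    # Standard Levenshtein-style dynamic programming table.
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * gap_cost
    D[0, :] = np.arange(m + 1) * gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(
                D[i - 1, j] + gap_cost,               # deletion
                D[i, j - 1] + gap_cost,               # insertion
                D[i - 1, j - 1] + sub[i - 1, j - 1],  # substitution
            )
    return D[n, m]

# Toy usage: two "images" described by strings of feature vectors
# (dimension 8 here, standing in for e.g. 4096-d CNN activations).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8)); x /= np.linalg.norm(x, axis=1, keepdims=True)
y = rng.normal(size=(7, 8)); y /= np.linalg.norm(y, axis=1, keepdims=True)
print(feature_edit_distance(x, y))
```

Such a distance can then be plugged into a kernel or nearest-neighbour classifier, which is the role the string distances play in the classification pipeline described above.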
Main file
Barat2016-String-preprint.pdf (1.32 MB)
Origin: Files produced by the author(s)

Dates and versions

ujm-01274675, version 1 (16-02-2016)

Identifiers

Cite

Cécile Barat, Christophe Ducottet. String representations and distances in deep Convolutional Neural Networks for image classification. Pattern Recognition, 2016, 54, pp.104-115. ⟨10.1016/j.patcog.2016.01.007⟩. ⟨ujm-01274675⟩
119 views
1783 downloads

