String representations and distances in deep Convolutional Neural Networks for image classification - Université Jean-Monnet-Saint-Étienne
Journal article: Pattern Recognition, 2016


Abstract

Recent advances in image classification mostly rely on the use of powerful local features combined with an adapted image representation. Although Convolutional Neural Network (CNN) features learned from ImageNet were shown to be generic and very efficient, they still lack the flexibility to take into account variations in the spatial layout of visual elements. In this paper, we investigate the use of structural representations on top of pre-trained CNN features to improve image classification. Images are represented as strings of CNN features. Similarities between such representations are computed using two new edit distance variants adapted to the image classification domain. Our algorithms have been implemented and tested on several challenging datasets: 15Scenes, Caltech101, Pascal VOC 2007 and MIT Indoor. The results show that our idea of using structural string representations and distances clearly improves the classification performance over standard approaches based on CNN and SVM with a linear kernel, as well as other recognized methods from the literature.
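The core idea of the abstract, comparing images represented as sequences of CNN feature vectors with an edit distance whose substitution cost depends on feature similarity, can be sketched generically. The snippet below is a minimal illustration, not the paper's two specific variants: the sequence layout, the cosine-based substitution cost, and the fixed gap penalty are all illustrative assumptions.

```python
import numpy as np

def seq_edit_distance(a, b, gap_cost=1.0):
    """Edit distance between two sequences of feature vectors.

    Substitution cost is the cosine distance between the two vectors;
    insertions and deletions pay a fixed gap penalty. This is a generic
    dynamic-programming sketch, not the paper's exact formulation.
    """
    n, m = len(a), len(b)
    d = np.zeros((n + 1, m + 1))
    # Aligning against an empty sequence costs one gap per element.
    d[:, 0] = np.arange(n + 1) * gap_cost
    d[0, :] = np.arange(m + 1) * gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            u, v = a[i - 1], b[j - 1]
            sub = 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            d[i, j] = min(d[i - 1, j] + gap_cost,      # delete from a
                          d[i, j - 1] + gap_cost,      # insert from b
                          d[i - 1, j - 1] + sub)       # substitute
    return d[n, m]
```

With such a distance in hand, image similarity can feed a kernel or nearest-neighbour classifier; two identical feature strings score 0, and the cost grows smoothly as the features or the spatial layout diverge.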
Main file: Barat2016-String-preprint.pdf (1.32 MB). Origin: files produced by the author(s).

Dates and versions

ujm-01274675 , version 1 (16-02-2016)

Cite

Cécile Barat, Christophe Ducottet. String representations and distances in deep Convolutional Neural Networks for image classification. Pattern Recognition, 2016, 54, pp.104-115. ⟨10.1016/j.patcog.2016.01.007⟩. ⟨ujm-01274675⟩