Approximate Image Matching using Strings of Bag-of-Visual Words Representation
Conference paper, 2014

Hong-Thinh Nguyen, Cécile Barat, Christophe Ducottet
Abstract

The Spatial Pyramid Matching approach has become very popular for modeling images as sets of local bags of visual words. Images are then compared region by region with an intersection kernel. Despite its success, this model has some limitations: the grid partitioning is predefined and identical for all images, and the matching is sensitive to intra- and inter-class variations. In this paper, we propose a novel approach based on approximate string matching to overcome these limitations and improve the results. First, we introduce a new image representation as strings of ordered bags of words. Second, we present a new edit distance specifically adapted to strings of histograms in the context of image comparison. This distance identifies local alignments between subregions and allows sequences of similar subregions to be removed to better match two images. Experiments on 15 Scenes and Caltech 101 show that the proposed approach outperforms the classical spatial pyramid representation and most competing classification methods presented in recent years.
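The paper's exact edit distance is not reproduced on this page, but the general idea it describes, a Levenshtein-style distance over a string of region histograms whose substitution cost comes from histogram intersection, can be sketched as follows. This is a minimal illustration under assumed cost choices (unit gap cost, substitution cost 1 minus intersection), not the authors' exact formulation:

```python
def intersection(h1, h2):
    """Histogram intersection similarity; 1.0 for identical L1-normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def histogram_string_edit_distance(s1, s2, gap_cost=1.0):
    """Edit distance between two strings (sequences) of histograms.

    Substitution cost is 1 - intersection, so aligning similar
    subregions is cheap; insertions/deletions pay a fixed gap cost.
    (These cost choices are illustrative assumptions.)
    """
    n, m = len(s1), len(s2)
    # D[i][j] = distance between the first i histograms of s1 and the first j of s2.
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap_cost
    for j in range(1, m + 1):
        D[0][j] = j * gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 1.0 - intersection(s1[i - 1], s2[j - 1])
            D[i][j] = min(D[i - 1][j] + gap_cost,    # delete a subregion
                          D[i][j - 1] + gap_cost,    # insert a subregion
                          D[i - 1][j - 1] + sub)     # substitute / match
    return D[n][m]
```

For identical strings the distance is 0; as the region histograms diverge, the cost approaches the plain Levenshtein distance between the two strings.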
Main file: Nguyen2014Approximate.pdf (298.2 KB). Origin: files produced by the author(s).

Dates and versions

ujm-01004415, version 1 (11-06-2014)

Cite

Hong-Thinh Nguyen, Cécile Barat, Christophe Ducottet. Approximate Image Matching using Strings of Bag-of-Visual Words Representation. International Conference on Computer Vision Theory and Applications (VISAPP 2014), Jan 2014, Lisbon, Portugal. pp.345-353, ⟨10.5220/0004676803450353⟩. ⟨ujm-01004415⟩