Near-lossless Binarization of Word Embeddings - Université Jean-Monnet-Saint-Étienne
Conference paper. Year: 2019

Near-lossless Binarization of Word Embeddings

Julien Tissier
Christophe Gravier
Amaury Habrard

Abstract

Word embeddings are commonly used as a starting point in many NLP models to achieve state-of-the-art performance. However, with a large vocabulary and many dimensions, these floating-point representations are expensive in both memory and computation, which makes them unsuitable for low-resource devices. The method proposed in this paper transforms real-valued embeddings into binary embeddings while preserving semantic information, requiring only 128 or 256 bits per vector. This leads to a small memory footprint and fast vector operations. The model is based on an autoencoder architecture, which also makes it possible to reconstruct the original vectors from the binary ones. Experimental results on semantic similarity, text classification and sentiment analysis tasks show that binarizing the word embeddings only leads to a loss of ∼2% in accuracy while vector size is reduced by 97%. Furthermore, a top-k benchmark demonstrates that using these binary vectors is 30 times faster than using real-valued vectors.
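As a rough illustration of the pipeline the abstract describes, the sketch below binarizes a set of vectors with a linear projection followed by a sign threshold, and retrieves nearest neighbours with XOR and popcount on the packed codes. It is a minimal sketch under stated assumptions, not the paper's implementation: the projection matrix W is random here, whereas the paper learns it with an autoencoder trained to reconstruct the original vectors, and all names (EMB_DIM, N_BITS, top_k) are illustrative.

# Minimal sketch: sign-threshold binarization of embeddings plus
# Hamming-distance top-k retrieval. W is random for illustration only;
# the paper learns it with an autoencoder.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, N_BITS, VOCAB = 300, 256, 10_000

# Stand-in for pretrained real-valued embeddings and an encoder matrix.
embeddings = rng.standard_normal((VOCAB, EMB_DIM)).astype(np.float32)
W = rng.standard_normal((EMB_DIM, N_BITS)).astype(np.float32)

# Encode: project, then threshold at 0; pack 256 bits into 32 bytes per
# word (vs. 300 * 4 = 1,200 bytes for float32, ~97% smaller, as in the abstract).
bits = embeddings @ W > 0
codes = np.packbits(bits, axis=1)            # shape (VOCAB, N_BITS // 8)

def top_k(query_idx: int, k: int = 10) -> np.ndarray:
    """Nearest neighbours of one word by Hamming distance (XOR + popcount)."""
    xor = np.bitwise_xor(codes, codes[query_idx])    # byte-wise XOR
    dists = np.unpackbits(xor, axis=1).sum(axis=1)   # popcount per word
    return np.argsort(dists)[1:k + 1]                # skip the query itself

print(top_k(42))

The speed claim in the abstract comes from exactly this substitution: a 256-bit Hamming distance is a handful of XOR and popcount operations, versus hundreds of floating-point multiply-adds for a similarity on 300-dimensional real-valued vectors.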
Main file

1803.09065.pdf (334.11 KB). Origin: files produced by the author(s).

Dates and versions

ujm-02010043, version 1 (06-02-2019)

Identifiers

  • HAL Id: ujm-02010043, version 1

Cite

Julien Tissier, Christophe Gravier, Amaury Habrard. Near-lossless Binarization of Word Embeddings. 33rd AAAI Conference on Artificial Intelligence (AAAI-19), Jan 2019, Honolulu, HI, United States. ⟨ujm-02010043⟩
88 views
127 downloads
