Dict2vec : Learning Word Embeddings using Lexical Dictionaries - Université Jean-Monnet-Saint-Étienne
Conference paper, 2017


Julien Tissier
Christophe Gravier
Amaury Habrard

Abstract

Learning word embeddings on large unlabeled corpora has been shown to improve many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. The resulting embeddings are generally better than their corpus-only counterparts, although such resources cover only a fraction of the words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet most refined data sources for describing words: natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically related words are moved closer together, and its negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained with Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.
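The pair-building step described above can be sketched in plain Python. This is a hedged illustration, not the authors' code: it assumes a toy dictionary mapping each word to a lowercase definition string, and it mirrors Dict2vec's distinction between pairs where each word occurs in the other's definition (mutual occurrence) and pairs where the occurrence is one-directional.

```python
# Illustrative sketch (not the authors' implementation): extract word
# pairs from a toy dictionary. Pairs with mutual occurrence ("strong")
# are separated from one-directional ones ("weak").

def build_pairs(dictionary):
    """dictionary: word -> lowercase definition string."""
    strong, weak = set(), set()
    for word, definition in dictionary.items():
        for other in definition.split():
            if other == word or other not in dictionary:
                continue  # only pair headwords of the dictionary
            pair = tuple(sorted((word, other)))
            # mutual occurrence -> strong pair, else weak pair
            if word in dictionary[other].split():
                strong.add(pair)
            else:
                weak.add(pair)
    weak -= strong  # a strong pair must not also count as weak
    return strong, weak

# Toy dictionary (hypothetical definitions for illustration only)
toy = {
    "car": "a road vehicle with an engine",
    "vehicle": "a machine such as a car used for transport",
    "engine": "a machine that converts energy into motion",
}
strong, weak = build_pairs(toy)
# "car" and "vehicle" each occur in the other's definition -> strong;
# "engine" occurs in the definition of "car" but not vice versa -> weak.
```

In the full method these pairs then act as positive training examples that pull related embeddings together, while negative sampling avoids drawing negatives from dictionary-related pairs.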
Main file: emnlp2017.pdf (212.73 KB)
Origin: files produced by the author(s)

Dates and versions

ujm-01613953, version 1 (12-10-2017)

Identifiers

  • HAL Id: ujm-01613953, version 1

Cite

Julien Tissier, Christophe Gravier, Amaury Habrard. Dict2vec : Learning Word Embeddings using Lexical Dictionaries. Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), Sep 2017, Copenhagen, Denmark. pp.254-263. ⟨ujm-01613953⟩
622 views
791 downloads
