
A Neural Few-Shot Text Classification Reality Check

Abstract: Modern classification models tend to struggle when annotated data is scarce. To overcome this issue, several neural few-shot classification models have emerged, yielding significant progress over time, both in Computer Vision and Natural Language Processing. In the latter, such models used to rely on fixed word embeddings before the advent of transformers. Additionally, some models used in Computer Vision have yet to be tested in NLP applications. In this paper, we compare all of these models: we first adapt those designed for image processing to NLP, and then give them access to transformers. We then test these models, equipped with the same transformer-based encoder, on the intent detection task, which is known for having a large number of classes. Our results reveal that while the methods perform almost equally on the ARSC dataset, this is not the case for intent detection, where the most recent and supposedly best competitors perform worse than older and simpler ones (even though all are given access to transformers). We also show that a simple baseline is surprisingly strong. All of the newly developed models, as well as the evaluation framework, are made publicly available.
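For readers who want a concrete picture of the setup described in the abstract, the sketch below shows few-shot intent classification with a shared transformer-based encoder and a nearest-prototype decision rule, in the spirit of prototypical networks. It is not the authors' implementation (the publicly released framework is the reference for that); the model name, the mean-pooling step, and the toy support set are illustrative assumptions.

import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any transformer encoder could be plugged in here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

@torch.no_grad()
def embed(sentences):
    """Mean-pool the encoder's last hidden states into one vector per sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state            # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (batch, tokens, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (batch, dim)

def prototype_classify(support, queries):
    """support: dict mapping intent label -> list of example utterances."""
    labels = list(support)
    # One prototype per class: the mean embedding of its support examples.
    protos = torch.stack([embed(support[label]).mean(dim=0) for label in labels])
    query_emb = embed(queries)
    # Assign each query to the class whose prototype is closest in Euclidean distance.
    distances = torch.cdist(query_emb, protos)              # (n_queries, n_classes)
    return [labels[i] for i in distances.argmin(dim=1).tolist()]

# Tiny illustrative 2-way 2-shot episode; labels and utterances are made up.
support_set = {
    "play_music": ["play some jazz", "put on my workout playlist"],
    "set_alarm": ["wake me up at 7 am", "set an alarm for tomorrow morning"],
}
print(prototype_classify(support_set, ["can you play the new album?"]))

Other metric-based few-shot methods differ mainly in how queries are compared with the support examples; the transformer encoder itself can stay the same, which is the controlled setting the paper evaluates.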
Document type: Conference papers

https://hal-ujm.archives-ouvertes.fr/ujm-03267869
Contributor: Christophe Gravier
Submitted on: Tuesday, June 22, 2021 - 5:00:24 PM
Last modification on: Tuesday, July 13, 2021 - 3:18:51 AM

File

2021.eacl-main.79.pdf
Publisher files allowed on an open archive

Identifiers

  • HAL Id: ujm-03267869, version 1

Citation

Thomas Dopierre, Christophe Gravier, Wilfried Logerais. A Neural Few-Shot Text Classification Reality Check. 16th Conference of the European Chapter of the Association for Computational Linguistics, Apr 2021, Kyiv (virtual), Ukraine. ⟨ujm-03267869⟩
