Predicting Query Difficulty in IR: Impact of Difficulty Definition

Josiane Mothe 1 Léa Laporte 2 Adrian-Gabriel Chifu 3
1 IRIT-SIG - Systèmes d’Informations Généralisées
IRIT - Institut de recherche en informatique de Toulouse
2 DRIM - Distribution, Recherche d'Information et Mobilité
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: While information exists on nearly any topic on the web, we know from information retrieval (IR) evaluation programs that search systems fail to answer some queries effectively. In the IR literature, system failure is associated with query difficulty. However, there is no clear definition of query difficulty. This paper investigates several ways of defining query difficulty and analyses the impact of these definitions on query difficulty prediction results. Our experiments show that the most stable definition across collections is a threshold-based definition of query difficulty classes.
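The threshold-based definition mentioned in the abstract can be illustrated with a minimal sketch: each query is assigned to a difficulty class by comparing a per-query effectiveness score to a fixed cutoff. The function name, the use of average precision (AP) as the score, and the 0.25 threshold below are illustrative assumptions, not values taken from the paper.

```python
def label_difficulty(ap_scores, threshold=0.25):
    """Illustrative threshold-based labeling: a query is 'hard' if its
    effectiveness score (here, average precision) falls below the
    threshold, and 'easy' otherwise. Scores and cutoff are hypothetical."""
    return {qid: ("hard" if ap < threshold else "easy")
            for qid, ap in ap_scores.items()}

# Example with made-up per-query AP scores:
scores = {"q301": 0.12, "q302": 0.48, "q303": 0.05}
print(label_difficulty(scores))  # {'q301': 'hard', 'q302': 'easy', 'q303': 'hard'}
```

A query difficulty predictor would then be evaluated on how well it recovers these class labels, which is why the choice of threshold (and of definition more generally) affects the reported prediction results.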
Document type: Conference papers
Contributor: Open Archive Toulouse Archive Ouverte (OATAO)
Submitted on: Friday, September 4, 2020 - 10:36:34 AM
Last modification on: Saturday, October 3, 2020 - 3:29:47 AM
Josiane Mothe, Léa Laporte, Adrian-Gabriel Chifu. Predicting Query Difficulty in IR: Impact of Difficulty Definition. 11th International Conference on Knowledge and Systems Engineering (KSE 2019), Oct 2019, Da Nang, Vietnam. pp.0, ⟨10.1109/KSE.2019.8919433⟩. ⟨hal-02930107⟩