An Axiomatic Analysis of Diversity Evaluation Metrics: Introducing the Rank-Biased Utility Metric

Posted by SIGIR Beijing Chapter on September 14, 2018

Title: An Axiomatic Analysis of Diversity Evaluation Metrics: Introducing the Rank-Biased Utility Metric

Speaker: Damiano Spina

Time: September 18th 2018 (15:00)

Venue: FIT 3-125

Abstract:

Many evaluation metrics have been defined to evaluate the effectiveness of ad-hoc retrieval and search result diversification systems. However, it is often unclear which evaluation metric should be used to analyse the performance of retrieval systems on a specific task. Axiomatic analysis is an informative mechanism for understanding the fundamentals of metrics and their suitability for particular scenarios. In this talk, I will present a constraint-based axiomatic framework to study the suitability of existing metrics in search result diversification scenarios. This analysis informed the definition of Rank-Biased Utility (RBU) -- an adaptation of the well-known Rank-Biased Precision metric -- that takes into account redundancy and the user effort associated with inspecting documents in the ranking. Our experiments over standard diversity evaluation campaigns show that the proposed metric captures quality criteria reflected by different metrics, making it suitable in the absence of knowledge about particular features of the scenario under study.
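To make the ideas in the abstract concrete, here is a minimal sketch of Rank-Biased Precision (RBP), the metric RBU adapts, together with a *hypothetical* redundancy- and effort-aware variant in the spirit of RBU. The exact RBU formula is the one given in the paper; the `redundancy_effort_variant` function below (its name, the subtopic-coverage gain, and the per-document effort penalty `e`) is an illustrative assumption, not the paper's definition.

```python
def rbp(rels, p=0.8):
    """Rank-Biased Precision (Moffat & Zobel):
    RBP = (1 - p) * sum_k p^(k-1) * rel_k,
    where p models the user's persistence down the ranking."""
    return (1 - p) * sum(r * p ** k for k, r in enumerate(rels))


def redundancy_effort_variant(subtopic_rels, p=0.8, e=0.01):
    """Hypothetical sketch (NOT the paper's RBU): each document's gain is
    the fraction of its subtopics not already covered earlier in the
    ranking (penalising redundancy), minus a fixed inspection-effort
    cost e per examined document, discounted by p^k as in RBP."""
    seen = set()
    score = 0.0
    for k, subtopics in enumerate(subtopic_rels):
        if subtopics:
            novel = [s for s in subtopics if s not in seen]
            gain = len(novel) / len(subtopics)
            seen.update(subtopics)
        else:
            gain = 0.0  # non-relevant document: no gain, only effort
        score += p ** k * (gain - e)
    return score


# A ranking with relevant documents at positions 1, 3 and 4:
print(round(rbp([1, 0, 1, 1], p=0.8), 4))  # -> 0.4304
```

Under this sketch, a ranking that keeps introducing new subtopics scores higher than one that repeats an already-covered subtopic, which is the behaviour redundancy-aware diversity metrics are designed to reward.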

Bio:

Damiano Spina is a Postdoctoral Research Fellow at RMIT University, Australia. His recent research interests include interactive information retrieval and evaluation, and he has published extensively in the field of information retrieval. Dr. Spina is an editorial board member of IP&M, a co-organiser of the Conversational Approaches to Information Retrieval (CAIR) workshop at SIGIR'18, and has served as a program committee member for international conferences including CIKM, SIGIR, WWW, CHIIR, and LREC, among others.