Abstract
This tutorial presents explainability of retrieval methods, an emerging area focused on fostering responsible and trustworthy deployment of machine learning systems in the context of information retrieval. As the field has evolved rapidly over the past 4–5 years, numerous approaches have been proposed that target different access modes, stakeholders, and model development stages. This tutorial aims to introduce IR-centric notions, classifications, and evaluation styles in explainable information retrieval (ExIR), focusing on IR-specific tasks such as ranking, text classification, and learning-to-rank. We will extensively cover post-hoc methods, probing approaches, and recent advances in interpretability-by-design. We will also discuss ExIR applications for different stakeholders, such as researchers, practitioners, and end-users, in contexts like web search, legal search, and high-stakes decision-making tasks. To facilitate practical understanding, we will provide a hands-on session on ExIR methods, reducing the entry barrier for students, researchers, and practitioners alike. Earlier versions of this tutorial were presented at SIGIR 2023 and FIRE 2023.
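As a taste of the hands-on portion, the following is a minimal, self-contained sketch of the perturbation-based, post-hoc style of explanation covered in the tutorial (in the spirit of explainers such as EXS and LIRME, though not reproducing their exact algorithms). It attributes a query-document relevance score to individual document terms by masking each term and measuring the resulting score drop. The scorer, function names, and example inputs below are illustrative assumptions: a simple term-overlap function stands in for a neural ranker.

```python
# Sketch of a perturbation-based, post-hoc explanation for a ranking score.
# The scorer is a toy term-overlap function standing in for a neural ranker;
# per-term importance is the score drop when that term is masked out.

from collections import Counter


def relevance_score(query: str, doc: str) -> float:
    """Stand-in scorer: fraction of query terms that appear in the document."""
    q_terms = query.lower().split()
    d_terms = Counter(doc.lower().split())
    if not q_terms:
        return 0.0
    return sum(1.0 for t in q_terms if d_terms[t] > 0) / len(q_terms)


def term_importance(query: str, doc: str) -> dict[str, float]:
    """Attribute the score to document terms via leave-one-term-out masking."""
    base = relevance_score(query, doc)
    terms = doc.split()
    importance: dict[str, float] = {}
    for i, term in enumerate(terms):
        masked = " ".join(terms[:i] + terms[i + 1:])
        # Importance = how much the score drops when this occurrence is removed.
        drop = base - relevance_score(query, masked)
        importance[term] = max(importance.get(term, 0.0), drop)
    return importance


if __name__ == "__main__":
    q = "explainable information retrieval"
    d = "a survey of explainable methods for information retrieval systems"
    print(sorted(term_importance(q, d).items(), key=lambda kv: -kv[1]))
```

In practice, the same leave-one-out loop would wrap a trained ranker's scoring call, and the resulting per-term drops could be rendered as highlighted evidence alongside each search result.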
Acknowledgments
Sourav Saha is supported by a TCS RSP PhD fellowship.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Anand, A., Saha, S., Venktesh, V. (2025). Explainable Information Retrieval. In: Hauff, C., et al. Advances in Information Retrieval. ECIR 2025. Lecture Notes in Computer Science, vol 15576. Springer, Cham. https://doi.org/10.1007/978-3-031-88720-8_40
DOI: https://doi.org/10.1007/978-3-031-88720-8_40
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-88719-2
Online ISBN: 978-3-031-88720-8