DSC/e Lecture: Marko Robnik-Šikonja, Explaining individual predictions with perturbations of inputs
- 24 January
- 12:30 - 13:30
- Luna, Corona room (Koepelzaal)
Current research into algorithmic explanation methods for predictive models can be divided into two main approaches: gradient-based approaches, which are limited to neural networks, and more general perturbation-based approaches, which can be used with arbitrary prediction models. We present an overview of perturbation-based approaches, with a focus on popular methods (EXPLAIN, IME, LIME). These methods support the explanation of individual predictions but can also visualize the model as a whole. The EXPLAIN method perturbs one input at a time, IME uses game-theory-based sampling to account for interactions between inputs, while LIME perturbs instances in the locality of the explained instance. We describe their working principles, how they handle computational complexity, and their visualizations, as well as their advantages and disadvantages. We illustrate issues and challenges in applying the explanation methodology to practical use cases from medicine and B2B sales forecasting in a company.
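To make the one-input-at-a-time idea concrete, here is a minimal sketch of an EXPLAIN-style perturbation explanation. It is not the speaker's implementation: the `predict` function is a hypothetical stand-in for any black-box model, and the contribution of each feature is estimated as the average change in prediction when that feature alone is replaced by values drawn from a background data set.

```python
import random

# Hypothetical linear "model" standing in for an arbitrary black-box predictor;
# the weights are chosen purely for illustration.
def predict(x):
    w = [2.0, -1.0, 0.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def explain_instance(predict, x, background, n_samples=200, seed=0):
    """EXPLAIN-style explanation: the contribution of feature i is the
    average drop in the prediction when feature i alone is replaced by
    values sampled from the background data set."""
    rng = random.Random(seed)
    base = predict(x)
    contributions = []
    for i in range(len(x)):
        total_diff = 0.0
        for _ in range(n_samples):
            z = list(x)
            z[i] = rng.choice(background)[i]  # perturb only feature i
            total_diff += base - predict(z)
        contributions.append(total_diff / n_samples)
    return contributions

background = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
x = [1.0, 0.0, 1.0]
print(explain_instance(predict, x, background))
```

For this linear model the estimate converges to w_i * (x_i - mean of the background values of feature i), so the zero-weight third feature gets contribution 0. IME differs in that it samples feature subsets (Shapley-value style) rather than perturbing one feature at a time, and LIME instead fits a local surrogate model on perturbed neighbors.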
Marko Robnik-Šikonja is Professor of Computer Science and Informatics and Head of the Artificial Intelligence Chair at the University of Ljubljana, Faculty of Computer and Information Science. His research interests span machine learning, data mining, knowledge discovery in databases, cognitive modelling, natural language processing, and the application of data mining techniques. His most notable scientific results concern feature evaluation, ensemble learning, network analysis, cost-sensitive learning, model and prediction explanation, generation of semi-artificial data, and natural language analysis. He is (co)author of more than 100 scientific publications and three open-source R data mining packages.