Extensions of the SHAP method: using Shapley values to interpret models with dependent features and survival analysis models
SHAP (SHapley Additive exPlanations) is one of the most popular explanation methods. Its widespread use is due to a solid theoretical basis in game theory, which gives the explanations a principled foundation. These advantages also drive the further development of explanation methods based on the concept of Shapley values.
The presentation will focus on noteworthy extensions of the method to more specific settings. I will describe explanation methods that use Shapley values while accounting for dependencies and causal relationships between features. I will also present a novel SHAP-inspired approach to explaining predictions of complex survival analysis models and share experience from applying this method to case studies in the healthcare domain.
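As a brief illustration of the kind of Shapley-value attributions the talk builds on, below is a minimal sketch using the DALEX (dalex) Python package mentioned in the bio; the model, data, and feature names are illustrative assumptions, not material from the talk.

import dalex as dx
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical tabular data: two features and a binary outcome (illustrative only)
X = pd.DataFrame({"age": [54, 61, 47, 70, 58, 39],
                  "bmi": [27.1, 31.4, 22.8, 29.0, 25.3, 24.1]})
y = pd.Series([0, 1, 0, 1, 1, 0])

model = RandomForestClassifier(random_state=0).fit(X, y)

# Wrap the model and compute Shapley-value attributions for a single observation
explainer = dx.Explainer(model, X, y, label="illustrative model")
attributions = explainer.predict_parts(X.iloc[[0]], type="shap")
print(attributions.result)  # per-feature contributions to this prediction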
Bio
Mateusz Krzyziński is a Data Science student at the Faculty of Mathematics and Information Sciences of the Warsaw University of Technology. He is a member of MI² DataLab, where he is currently working on methods for explaining complex survival models. He is also involved in research on applying machine learning in medicine. He is a co-author of one of the modules of the DALEX Python package, which received the John M. Chambers Statistical Software Award in 2022.
His general interests include explainable machine learning and its application in medicine, survival analysis, and graph theory.