“Post-hoc prototype explanations for classification models”
Explainable AI traditionally forces a choice between flexible but vague post-hoc methods (such as saliency maps) and intuitive but rigid prototype-based architectures. This talk introduces a framework that bridges this gap: post-hoc prototype explanations. We demonstrate how to extract intuitive, example-based explanations from pre-trained networks without architectural modifications or retraining. We will explore this approach across two modalities:

1. EPIC (Vision), accepted as an oral at AAAI 2026: explaining pre-trained image classifiers on benchmarks ranging from CUB-200 to ImageNet.
2. APEX (Audio), in review at Interspeech 2026: adapting prototypes to the acoustic domain by disentangling signals into precise time and frequency perspectives.

Together, these methods provide a unified, plug-and-play toolkit for highly interpretable visual and audio classification.
Bio
Piotr Borycki is a PhD student at Jagiellonian University. He is also a member of the Computer Graphics team led by Prof. Przemysław Spurek at the Ideas Research Institute. His research focuses on Neural Rendering, in particular the control and editability of Gaussian Splatting scene representations, and on the interpretability of deep learning models. Previously, he was a visiting postgraduate student at the Cyber-Human Lab at the University of Cambridge.
