I am a Postdoc in Machine Learning and Computer Vision at MultixLab, University of Amsterdam. Previously, I did my PhD at Bosch Delta Lab and VIS Lab, University of Amsterdam, under the supervision of Prof. Dr. Arnold Smeulders. Before starting my PhD, I completed my master's in Electrical Engineering at KAIST, South Korea, under the supervision of Prof. Dr. Jong-Hwan Kim at the Robot Intelligence Technology Lab. I completed my bachelor's in Electrical and Computer Engineering at COMSATS Institute of Information Technology, Pakistan.


I work on explanations for deep neural networks in the context of videos, with the goal of improving the transparency, interpretability, and trustworthiness of AI systems, and of enabling users to understand the reasoning behind a network's decisions.

Email · LinkedIn · Twitter · GitHub

Research Publications and Preprints

Sadaf Gulshad, Teng Long, Nanne van Noord. Hierarchical Explanations for Video Action Recognition. Preprint, 2023

Inspired by the human cognitive system, we leverage hierarchical information and propose the HIerarchical Prototype Explainer (HIPE). HIPE enables a reasoning process for video action classification by dissecting the input video frames at multiple levels of the class hierarchy.
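
To make the idea concrete, here is a minimal sketch (not the paper's implementation) of scoring a video embedding against learned prototypes at two levels of a class hierarchy; all names, shapes, and the cosine-similarity scoring are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def hierarchical_scores(video_embedding, coarse_prototypes, fine_prototypes):
    """video_embedding: (d,); each prototype tensor: (num_classes, d)."""
    coarse = F.cosine_similarity(video_embedding.unsqueeze(0), coarse_prototypes)
    fine = F.cosine_similarity(video_embedding.unsqueeze(0), fine_prototypes)
    return coarse, fine

embedding = torch.randn(128)                       # stand-in for a video clip embedding
coarse, fine = hierarchical_scores(embedding,
                                   torch.randn(5, 128),    # e.g. 5 coarse action groups
                                   torch.randn(40, 128))   # e.g. 40 fine-grained actions
print("most similar coarse class:", coarse.argmax().item())
print("most similar fine class:", fine.argmax().item())
```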

Project Page Paper Code 

Figure: Examples of various types of data imperfection: incorrect segmentation, low contrast, and artifacts.

Ayetullah Mehdi Günes, Ward van Rooij, Sadaf Gulshad, Ben Slotman, Max Dahele, Wilko Verbakel. Impact of imperfection in medical imaging data on deep learning-based segmentation performance: An experimental study using synthesized data. Medical Physics, 2023

This study investigates, in a controlled manner using synthesized data, the influence of data imperfections on the performance of deep learning models for parotid gland segmentation. The insights it provides can help make deep learning models more accurate and more reliable.
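
As an illustration of the controlled-synthesis idea, the hedged sketch below degrades an image's contrast by a known factor so that segmentation performance could be measured as a function of severity; it is a generic example, not the study's actual pipeline.

```python
import numpy as np

def reduce_contrast(image, factor):
    """Blend the image toward its mean intensity; factor=1.0 keeps it
    unchanged, factor=0.0 collapses all contrast."""
    mean = image.mean()
    return mean + factor * (image - mean)

ct_slice = np.random.rand(256, 256)  # stand-in for a CT slice
for severity in (1.0, 0.5, 0.1):
    degraded = reduce_contrast(ct_slice, severity)
    print(f"factor={severity}: intensity std={degraded.std():.3f}")
```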

Paper 

Sadaf Gulshad. Explainable Robustness for Visual Classification. PhD Thesis, 2022

In this thesis, we explore the explainable robustness of neural networks for visual classification. We study an essential question for making neural networks deployable in real-world applications: “how to make neural networks explainably robust?”

Thesis (English) Chinese

Sadaf Gulshad, Ivan Sosnovik and Arnold Smeulders. Wiggling Weights to Improve the Robustness of Classifiers. Preprint, 2021

While many approaches achieve robustness by training the network on augmented data, we integrate perturbations into the network architecture itself to achieve improved and more general robustness.
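
A minimal sketch of the general idea follows: perturb ("wiggle") the weights inside the layer rather than the input data, and aggregate the responses. The Gaussian noise model and mean aggregation are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class WiggledConv2d(nn.Module):
    """Convolution that averages responses over several perturbed weight copies."""
    def __init__(self, in_ch, out_ch, kernel_size, n_copies=4, sigma=0.05):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.n_copies, self.sigma = n_copies, sigma

    def forward(self, x):
        outputs = []
        for _ in range(self.n_copies):
            # Perturbation lives in the architecture, not in the data.
            noise = self.sigma * torch.randn_like(self.conv.weight)
            outputs.append(nn.functional.conv2d(
                x, self.conv.weight + noise, self.conv.bias,
                padding=self.conv.padding))
        return torch.stack(outputs).mean(dim=0)

layer = WiggledConv2d(3, 16, 3)
print(layer(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```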

Paper  Code 

Sadaf Gulshad, Ivan Sosnovik and Arnold Smeulders. Built-in Elastic Transformations for Improved Robustness. Preprint, 2021

We present elastically-augmented convolutions (EAConv), which parameterize filters as a combination of fixed, elastically perturbed basis functions and trainable weights, in order to integrate unseen viewpoints into the CNN.
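
Below is a minimal sketch of the basis parameterization: only the combination weights are trained, while the bases stay fixed. The random bases here are placeholders for the paper's elastically perturbed bases.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisConv2d(nn.Module):
    """Filters are linear combinations of fixed basis filters."""
    def __init__(self, in_ch, out_ch, kernel_size, n_bases=8):
        super().__init__()
        # Fixed bases, frozen via register_buffer: (n_bases, in_ch, k, k).
        self.register_buffer(
            "bases", torch.randn(n_bases, in_ch, kernel_size, kernel_size))
        # Trainable combination weights: (out_ch, n_bases).
        self.weights = nn.Parameter(torch.randn(out_ch, n_bases) / n_bases ** 0.5)

    def forward(self, x):
        # Build filters as weighted sums of the fixed bases.
        filters = torch.einsum("ob,bikl->oikl", self.weights, self.bases)
        return F.conv2d(x, filters, padding=filters.shape[-1] // 2)

layer = BasisConv2d(3, 16, 5)
print(layer(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```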

Paper  Code 

Jeroen F Vranken, Rutger R van de Leur, Deepak K Gupta, Luis E Juarez Orozco, Rutger J Hassink, Pim van der Harst, Pieter A Doevendans, Sadaf Gulshad, René van Es. Uncertainty estimation for deep learning-based automated analysis of 12-lead electrocardiograms. European Heart Journal - Digital Health, 2021

This study systematically investigates uncertainty estimation techniques for the automated classification of ECGs using DNNs, and gains insight into their utility through a clinical simulation.
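
For illustration, here is a hedged sketch of one widely used uncertainty estimation technique, Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and read the spread of the predictions as uncertainty. The study compares several techniques; this generic example is not necessarily the one it selected, and the tiny model is a placeholder.

```python
import torch
import torch.nn as nn

# Placeholder classifier over a flattened 12-lead ECG (12 leads x 500 samples).
model = nn.Sequential(nn.Linear(12 * 500, 64), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(64, 2))

def mc_dropout_predict(model, x, n_samples=20):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    # Mean is the prediction; standard deviation acts as an uncertainty estimate.
    return probs.mean(dim=0), probs.std(dim=0)

ecg = torch.randn(1, 12 * 500)  # stand-in for a real ECG
mean, std = mc_dropout_predict(model, ecg)
print("mean probs:", mean, "uncertainty:", std)
```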

Paper 

Sadaf Gulshad and Arnold Smeulders. Counterfactual Attribute-based Visual Explanations for Classification. International Journal of Multimedia Information Retrieval (IJMIR), 2021

In this paper, we aim to provide human-understandable, intuitive factual and counterfactual explanations for the decisions of neural networks.
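
A toy sketch of the attribute-based factual/counterfactual idea is given below: report the attribute that most supports the predicted class (factual) and the attribute that most separates the image from a counter class (counterfactual). The attribute names, class signatures, and scoring rule are all illustrative assumptions, not the paper's method.

```python
import numpy as np

attributes = ["striped", "has wings", "metallic", "furry"]
# Hypothetical per-class attribute signatures.
class_signatures = {"zebra":    np.array([0.9, 0.0, 0.0, 0.6]),
                    "airplane": np.array([0.1, 0.9, 0.8, 0.0])}

def explain(image_attrs, predicted, counter):
    # Factual: attribute present in the image that supports the prediction.
    factual = np.argmax(image_attrs * class_signatures[predicted])
    # Counterfactual: attribute whose presence would support the counter class.
    counterfactual = np.argmax(class_signatures[counter] - image_attrs)
    return attributes[factual], attributes[counterfactual]

image_attrs = np.array([0.8, 0.1, 0.0, 0.7])  # attribute scores for one image
fact, counter = explain(image_attrs, "zebra", "airplane")
print(f"'zebra' because it is '{fact}'; it would be 'airplane' if it '{counter}'")
```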

Paper  Code Presentation 

Sadaf Gulshad and Arnold Smeulders. Explaining with Counter Visual Attributes and Examples. International Conference on Multimedia Retrieval (ICMR 2020), ACM

When humans explain visual decisions, they tend to do so by providing attributes and examples. Different from previous work that interprets decisions using saliency maps, text, or visual patches, we propose to use attributes and counter-attributes, together with examples and counter-examples, as part of the visual explanations.

Paper  Code 

Sadaf Gulshad, Jan Hendrik Metzen, Arnold Smeulders, and Zeynep Akata. Interpreting Adversarial Examples with Attributes. Preprint, 2019

We propose to enable black-box neural networks to justify their reasoning for both clean and adversarial examples by leveraging attributes, i.e., visually discriminative properties of objects.
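
For context, the hedged sketch below generates an adversarial example with FGSM (fast gradient sign method), a standard attack; it illustrates the kind of input such explanations must handle, not the paper's own method. The tiny model is a placeholder.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder
x = torch.randn(1, 3, 32, 32, requires_grad=True)  # stand-in for a clean image
y = torch.tensor([3])                              # its true label

# FGSM: take a small step in the direction that increases the loss.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
x_adv = (x + 0.03 * x.grad.sign()).detach()

print("prediction:", model(x).argmax().item(), "->", model(x_adv).argmax().item())
```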

Paper Code 

Teaching and Graduate Student Supervision