1. "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
Local Interpretable Model-agnostic Explanations (LIME). The overall goal of LIME is to identify an interpretable model over an interpretable representation that is locally faithful to the classifier (see the sketch after this entry).
https://homes.cs.washington.edu/~marcotcr/blog/lime/
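A minimal sketch of that idea for tabular data, to make the description concrete: perturb the instance of interest, query the black-box classifier on the perturbed neighbourhood, weight the samples by proximity, and fit a simple linear surrogate that is faithful only locally. The function name `explain_instance_sketch`, the Gaussian perturbation, the binary-classification assumption, and the choice of Ridge as the interpretable model are simplifications for illustration, not the paper's exact procedure (which perturbs an interpretable binary representation).

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance_sketch(instance, predict_proba, n_samples=500, kernel_width=0.75):
    """Rough illustration of LIME's local-surrogate idea for a 1-D feature vector."""
    rng = np.random.default_rng(0)
    # Perturb the instance of interest (simplified: plain Gaussian noise
    # instead of the paper's interpretable binary representation).
    perturbed = instance + rng.normal(0.0, 1.0, size=(n_samples, instance.shape[0]))
    # Query the black-box classifier on the perturbed neighbourhood
    # (assumes a binary classifier; column 1 is the positive-class probability).
    target = predict_proba(perturbed)[:, 1]
    # Weight each sample by its proximity to the original instance
    # using an exponential kernel over Euclidean distance.
    distances = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # Fit an interpretable (weighted linear) model that is locally faithful.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, target, sample_weight=weights)
    # The coefficients serve as per-feature explanations for this one prediction.
    return surrogate.coef_
```

With a trained scikit-learn classifier `clf` and a test row `X_test[0]` (hypothetical names), it would be called as `explain_instance_sketch(X_test[0], clf.predict_proba)`; the `lime` package provides a full implementation of the method described in the paper.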
3. Introduction to SIFT (Scale-Invariant Feature Transform)