Li, G., Hari, S. K. S., Sullivan, M., Tsai, T., Pattabiraman, K., Emer, J., & Keckler, S. W. (2017, November). Understanding error propagation in deep learning neural network (DNN) accelerators and applications. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (pp. 1-12).
Liu, Y., Wei, L., Luo, B., & Xu, Q. (2017, November). Fault injection attack on deep neural network. In 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (pp. 131-138). IEEE.
Tramèr, F., Zhang, F., Juels, A., Reiter, M. K., & Ristenpart, T. (2016). Stealing machine learning models via prediction APIs. In 25th USENIX Security Symposium (USENIX Security 16) (pp. 601-618).
Chen, Z., Li, G., & Pattabiraman, K. (2021, June). A low-cost fault corrector for deep neural networks through range restriction. In 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) (pp. 1-13). IEEE.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR).
Yang, P., Chen, J., Hsieh, C.-J., Wang, J.-L., & Jordan, M. (2020). ML-LOO: Detecting adversarial examples with feature attribution. In Proceedings of the AAAI Conference on Artificial Intelligence.
Ozen, E., & Orailoglu, A. (2019, December). Sanity-Check: Boosting the reliability of safety-critical deep neural network applications. In 2019 IEEE 28th Asian Test Symposium (ATS) (pp. 7-75). IEEE.