Publications

CSTAR: Towards Compact and Structured Deep Neural Networks with Adversarial Robustness
[AAAI-23] AAAI Conference on Artificial Intelligence (acceptance rate 19.6%)
Huy Phan, Miao Yin, Yang Sui, Saman Zonouz, and Bo Yuan

Considering the co-importance of model compactness and robustness in practical applications, several prior works have explored improving the adversarial robustness of sparse neural networks. However, the structured sparse models obtained by existing works suffer severe degradation in both benign and robust accuracy. We propose CSTAR, an efficient solution that simultaneously imposes Compactness, high STructuredness, and high Adversarial Robustness on the target DNN models.
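As a rough illustration of the general setting rather than the CSTAR algorithm itself, the sketch below combines PGD adversarial training with a simple group-sparsity penalty that pushes whole convolutional filters toward zero; the model, data, and hyperparameters are placeholders.

```python
# Illustrative sketch only (not the CSTAR method): PGD adversarial training plus a
# structured (filter-level) sparsity penalty, i.e., the general robust + compact setting.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate L-infinity PGD adversarial examples for inputs in [0, 1]."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0, 1)
    return x_adv.detach()

def group_sparsity_penalty(model):
    """Group-lasso style penalty that pushes whole conv filters toward zero."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            penalty = penalty + m.weight.flatten(1).norm(dim=1).sum()
    return penalty

def robust_compact_step(model, optimizer, x, y, lam=1e-4):
    model.eval()                      # freeze batch-norm statistics while crafting x_adv
    x_adv = pgd_attack(model, x, y)
    model.train()
    loss = F.cross_entropy(model(x_adv), y) + lam * group_sparsity_penalty(model)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```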

RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN
[ECCV-22] European Conference on Computer Vision (acceptance rate 28%)
Huy Phan, Cong Shi, Yi Xie, Tianfang Zhang, Zhuohang Li, Tianming Zhao, Jian Liu, Yan Wang, Yingying Chen, and Bo Yuan

Recently, backdoor attacks have become an emerging threat to the security of deep neural network (DNN) models. In this paper, we propose to study and develop Robust and Imperceptible Backdoor Attack against Compact DNN models (RIBAC). By performing systematic analysis and exploration of the important design knobs, we propose a framework that learns the proper trigger patterns, model parameters, and pruning masks in an efficient way, thereby achieving high trigger stealthiness, high attack success rate, and high model efficiency simultaneously.
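The following is a minimal, hypothetical sketch of this kind of joint optimization, not the exact RIBAC procedure: an additive trigger (clamped to a small budget for imperceptibility), magnitude-based pruning masks, and the model weights are updated together on a mix of clean and poisoned samples. The trigger is assumed to be an nn.Parameter registered with the optimizer.

```python
# Hypothetical sketch: jointly train a compact (pruned) model, an imperceptible trigger,
# and the backdoor behavior by poisoning a fraction of each batch.
import torch
import torch.nn.functional as F

def magnitude_mask(weight, sparsity=0.5):
    """Keep the largest-magnitude entries of a weight tensor; zero out the rest."""
    k = int(weight.numel() * (1 - sparsity))
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k).values
    return (weight.abs() > threshold).float()

def backdoor_step(model, optimizer, trigger, x, y, target_class,
                  eps=4/255, poison_ratio=0.1, sparsity=0.5):
    # Poison a fraction of the batch with the (clamped) trigger and the target label.
    n_poison = max(1, int(poison_ratio * x.size(0)))
    trigger.data.clamp_(-eps, eps)                 # keep the trigger imperceptible
    x_poison = (x[:n_poison] + trigger).clamp(0, 1)
    y_poison = torch.full((n_poison,), target_class, dtype=torch.long, device=x.device)

    # Re-apply magnitude pruning masks so training proceeds on a sparse (compact) model.
    for m in model.modules():
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
            m.weight.data *= magnitude_mask(m.weight.data, sparsity)

    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_poison), y_poison)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```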

Towards Yield Improvement for AI Accelerators: Analysis and Exploration
[ISVLSI-22] IEEE Computer Society Annual Symposium on VLSI
Mohammad Walid Charrwi, Huy Phan, Bo Yuan, Samah Mohamed Saeed

At the manufacturing stage of AI chips, some fabrication faults are critical because they significantly affect the accuracy of the executed AI workload. In this paper, we analyze the ability to detect faults based on their functional criticality, given the scan architecture that enables cost-effective manufacturing testing of systolic-array-based AI accelerators. We consider different approaches that classify the faults of the AI chip as critical or benign, and we propose and analyze two detection schemes that minimize the probability of false alarms.

Audio-domain Position-independent Backdoor Attack via Unnoticeable Triggers
[MobiCom-22] Annual International Conference on Mobile Computing and Networking (acceptance rate 18%)
Cong Shi, Tianfang Zhang, Zhuohang Li, Huy Phan, Tianming Zhao, Yan Wang, Jian Liu, Bo Yuan, and Yingying Chen

Deep learning models have become key enablers of voice user interfaces. We investigate an effective yet stealthy training-phase (backdoor) attack in the audio domain, where hidden/unnoticeable trigger patterns are injected through training-set poisoning and overwrite the models' predictions in the inference phase. An attacker can play an unnoticeable audio trigger (e.g., bird chirps, footsteps) into the live speech of a victim to launch the attack.
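A hypothetical sketch of the poisoning step, under assumed data conventions (single-channel waveforms in [-1, 1]): a short, low-volume trigger clip is mixed into a speech sample at a random offset, and the sample is relabeled with the attacker's target class.

```python
# Illustrative poisoning step: inject an audio trigger at a random position so that the
# backdoor does not depend on where the trigger appears in the utterance.
import torch

def poison_waveform(speech, trigger, target_class, scale=0.05):
    """speech: (num_samples,) waveform; trigger: shorter (trigger_len,) waveform clip."""
    poisoned = speech.clone()
    max_start = speech.numel() - trigger.numel()
    start = torch.randint(0, max_start + 1, (1,)).item()    # position-independent injection
    poisoned[start:start + trigger.numel()] += scale * trigger
    poisoned = poisoned.clamp(-1.0, 1.0)                     # keep a valid audio range
    return poisoned, target_class

# Usage: replace a small fraction of (speech, label) training pairs with
# poison_waveform(speech, trigger, target_class) before training the model.
```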

Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks
[ICASSP-22] IEEE International Conference on Acoustics, Speech and Signal Processing
Huy Phan, Yi Xie, Jian Liu, Yingying Chen, Bo Yuan

Compressed deep neural network (DNN) models have been widely deployed on many resource-constrained platforms and devices. However, the security of compressed models, especially their vulnerability to backdoor attacks, has not yet been well explored. In this paper, we study the feasibility of practical backdoor attacks on compressed DNNs. More specifically, we propose a universal adversarial perturbation (UAP)-based approach that achieves both high attack stealthiness and high attack efficiency simultaneously.
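The sketch below shows one plausible way to craft a targeted universal adversarial perturbation that could serve as an input-agnostic trigger; the optimization details are assumptions for illustration, not the paper's exact algorithm.

```python
# Illustrative targeted UAP crafting: one shared, L-infinity-bounded perturbation that
# pushes many inputs toward a chosen target class.
import torch
import torch.nn.functional as F

def targeted_uap(model, loader, target_class, eps=8/255, alpha=1/255, epochs=5, device="cpu"):
    model.eval().to(device)
    # One shared perturbation with the per-sample input shape (e.g., C x H x W).
    delta = torch.zeros(next(iter(loader))[0].shape[1:], device=device)
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            delta.requires_grad_(True)
            logits = model((x + delta).clamp(0, 1))
            target = torch.full((x.size(0),), target_class, dtype=torch.long, device=device)
            loss = F.cross_entropy(logits, target)       # push all inputs toward the target
            grad = torch.autograd.grad(loss, delta)[0]
            delta = (delta.detach() - alpha * grad.sign()).clamp(-eps, eps)  # L-inf projection
    return delta.detach()
```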

Visual Privacy Protection in Mobile Image Recognition Using Protective Perturbation
[MMSys-22] Proceedings of the 13th ACM Multimedia Systems Conference
Mengmei Ye, Zhongze Tang, Huy Phan, Yi Xie, Bo Yuan, Sheng Wei

Deep neural networks (DNNs) have been widely adopted in mobile image recognition applications. Considering intellectual property and computation resources, the image recognition model is often deployed at the service provider's end, which takes input images from the user's mobile device and accomplishes the recognition task. However, from the user's perspective, the input images could contain sensitive information subject to visual privacy concerns, and the user must protect that privacy while offloading the images to the service provider. To address this visual privacy issue, we develop a protective perturbation generator at the user end, which adds perturbations to the input images to prevent privacy leakage.
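As a simplified stand-in for the learned generator, the sketch below directly optimizes a bounded perturbation for one image so that an assumed recognition model keeps its prediction while an assumed auxiliary privacy classifier is pushed away from the sensitive attribute it originally detected.

```python
# Illustrative protective perturbation: preserve the recognition output, degrade an
# auxiliary privacy classifier. Both models and the [0, 1] image range are assumptions.
import torch
import torch.nn.functional as F

def protective_perturbation(recog_model, privacy_model, image, eps=8/255, alpha=1/255, steps=40):
    recog_model.eval(); privacy_model.eval()
    with torch.no_grad():
        recog_label = recog_model(image).argmax(dim=1)      # prediction the service still needs
        privacy_label = privacy_model(image).argmax(dim=1)  # sensitive attribute to hide
    delta = torch.zeros_like(image)
    for _ in range(steps):
        delta.requires_grad_(True)
        x = (image + delta).clamp(0, 1)
        # Keep recognition correct while pushing the privacy classifier off its prediction.
        loss = F.cross_entropy(recog_model(x), recog_label) \
             - F.cross_entropy(privacy_model(x), privacy_label)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta.detach() - alpha * grad.sign()).clamp(-eps, eps)
    return (image + delta).clamp(0, 1).detach()
```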

BATUDE: Budget-aware Neural Network Compression Based on Tucker Decomposition
[AAAI-22] Proceedings of the AAAI Conference on Artificial Intelligence (acceptance rate 15%)
Miao Yin, Huy Phan, Xiao Zang, Siyu Liao, Bo Yuan

Model compression is very important for the efficient deployment of deep neural network (DNN) models on resource-constrained devices. Among various model compression approaches, high-order tensor decomposition is particularly attractive and useful because the decomposed model is very small and fully structured. In this paper, we propose BATUDE, a Budget-Aware TUcker DEcomposition-based compression approach that can efficiently calculate the optimal tensor ranks via one-shot training.
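For context, the snippet below shows a plain Tucker decomposition of a single convolution kernel using TensorLy, with hand-picked channel-mode ranks; BATUDE's contribution is selecting such ranks automatically under a global budget via one-shot training, which is not reproduced here.

```python
# Baseline Tucker compression of one conv kernel (ranks chosen by hand for illustration).
import torch
import tensorly as tl
from tensorly.decomposition import tucker

tl.set_backend("pytorch")

def tucker_compress_conv_weight(weight, rank_out, rank_in):
    """weight: (C_out, C_in, k, k) conv kernel; spatial modes are kept at full rank."""
    k = weight.shape[-1]
    core, factors = tucker(weight, rank=[rank_out, rank_in, k, k])
    approx = tl.tenalg.multi_mode_dot(core, factors)         # reconstruct for an error check
    orig_params = weight.numel()
    new_params = core.numel() + sum(f.numel() for f in factors)
    rel_err = torch.norm(weight - approx) / torch.norm(weight)
    print(f"params: {orig_params} -> {new_params}, relative error: {rel_err:.3f}")
    return core, factors

# Example: compress a 64x128x3x3 kernel with channel-mode ranks (32, 64).
core, factors = tucker_compress_conv_weight(torch.randn(64, 128, 3, 3), rank_out=32, rank_in=64)
```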

CHIP: CHannel Independence-based Pruning for Compact Neural Networks
[NeurIPS-21] Advances in Neural Information Processing Systems (acceptance rate 26%)
Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Aliari Zonouz, Bo Yuan

Filter pruning has been widely used for neural network compression because it enables practical acceleration. To date, most existing filter pruning works assess the importance of filters using intra-channel information. In this paper, starting from an inter-channel perspective, we propose to perform efficient filter pruning using Channel Independence, a metric that measures the correlations among different feature maps.
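The following is a hypothetical proxy for channel independence, not the paper's exact metric: each channel's flattened feature map is regressed on the remaining channels, and channels that the others cannot reconstruct well are scored as more independent and therefore kept.

```python
# Hypothetical inter-channel independence proxy for ranking channels before pruning.
import torch

def channel_independence_scores(features):
    """features: (N, C, H, W) feature maps from one layer for a batch of inputs."""
    n, c, h, w = features.shape
    maps = features.permute(1, 0, 2, 3).reshape(c, -1)       # (C, N*H*W)
    scores = torch.zeros(c)
    for i in range(c):
        others = torch.cat([maps[:i], maps[i + 1:]], dim=0)  # (C-1, N*H*W)
        # Least-squares reconstruction of channel i from the remaining channels.
        coeffs = torch.linalg.lstsq(others.T, maps[i].unsqueeze(1)).solution
        residual = maps[i] - (others.T @ coeffs).squeeze(1)
        scores[i] = residual.norm() / (maps[i].norm() + 1e-8)
    return scores   # prune the channels with the smallest scores
```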

VVSec: Securing Volumetric Video Streaming via Benign Use of Adversarial Perturbation
[MM-20] ACM International Conference on Multimedia
Zhongze Tang, Xianglong Feng, Yi Xie, Huy Phan, Tian Guo, Bo Yuan, Sheng Wei

Volumetric video (VV) streaming has drawn increasing interest recently with the rapid advancements in consumer VR/AR devices and the relevant multimedia and graphics research. We identify, for the first time, an effective threat model that extracts 3D face models from volumetric videos and compromises face ID-based authentication. To defend against such attacks, we develop a novel volumetric video security mechanism, namely VVSec, which makes benign use of adversarial perturbations to obfuscate the security- and privacy-sensitive 3D face models.

CAG: A Real-Time Low-Cost Enhanced-Robustness High-Transferability Content-Aware Adversarial Attack Generator
[AAAI-20] AAAI Conference on Artificial Intelligence (acceptance rate 20.6%)
Huy Phan, Yi Xie, Siyu Liao, Jie Chen, Bo Yuan

Deep neural networks (DNNs) are vulnerable to adversarial attacks despite their tremendous success in many artificial intelligence fields. An adversarial attack is a method that causes intended misclassification by adding imperceptible perturbations to legitimate inputs. However, from the perspective of practical deployment, these methods suffer from several drawbacks such as long attack-generation time, high memory cost, insufficient robustness, and low transferability. To address these drawbacks, we propose a Content-aware Adversarial Attack Generator (CAG) to achieve real-time, low-cost, enhanced-robustness, and high-transferability adversarial attacks.
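As an illustration of the generator-based idea (with an assumed toy architecture, not the paper's design), the sketch below trains a small encoder-decoder that maps an input image to a bounded, content-aware perturbation in a single forward pass.

```python
# Illustrative generator that produces an input-conditioned, bounded adversarial
# perturbation in one forward pass, trained against a fixed victim model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    def __init__(self, channels=3, eps=8/255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),   # output in [-1, 1]
        )

    def forward(self, x):
        return self.eps * self.net(x)                           # content-aware, L-inf bounded

def train_step(generator, target_model, optimizer, x, y):
    target_model.eval()
    delta = generator(x)
    logits = target_model((x + delta).clamp(0, 1))
    loss = -F.cross_entropy(logits, y)      # maximize the victim's loss (untargeted attack)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```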