Research Portfolio

My research vision: AI Model Inspector for AI Maintenance -- make the robustness inspection pipeline for AI models as reliable, standard, and easy as car maintenance

Adversarial Machine Learning: Attack, Defense, and Robustness Evaluation & Verification

    • ZOO [AI-Sec'17]: powerful black-box attack on neural networks - achieves nearly the same performance as white-box attacks

    • EAD [AAAI'18, two ICLR'18 Wksp, DSN'18 Wksp]: crafting L1-norm-based adversarial examples - better attack transferability; weakened several defenses and enabled adversary analysis

    • CLEVER [ICLR'18, GlobalSIP'18]: attack-agnostic network robustness measure - estimates a certified lower bound on the minimum adversarial distortion

    • Adversarial T-Shirt [ECCV'20]: physical adversarial examples that evade person detectors

    • Show-and-Fool [ACL'18]: adversarial examples for neural image captioning systems

    • Adversarial robustness vs. classification accuracy tradeoff uncovered from 18 deep ImageNet models, plus attack transferability analysis between 306 pairs of these networks [ECCV'18]

    • Adversarial attack on sparse regression (feature identification) [GlobalSIP'18]

    • CROWN & CNN-Cert & PROVEN & Semantify [NeurIPS'18, AAAI'19, ICML'19, CVPR'20]: Formal (worst-case or probabilistic) and efficient robustness certification of neural networks with general activation functions, popular layer modules, and semantic perturbations

    • AutoZOOM & ZO-NGD & ZO-ADMM [AAAI'19, ICCV'19, AAAI'20]: query-efficient black-box attack acceleration via dimension reduction and advanced zeroth-order optimization techniques (soft-label & hard-label attacks)

    • Opt Attack & ZO-ADMM & Sign-OPT [ICLR'19, ICCV'19, ICLR'20]: Query-efficient zeroth-order-optimization-based black-box attacks with limited information (decision-based setting in which the model reveals only the top-1 prediction label)

    • TD detection [ICLR'19]: Detecting adversarial audio inputs using temporal dependency

    • Structured adversarial attack [ICLR'19]: spatial structure guided adversarial attack and model interpretability

    • Paraphrasing Attack [SysML'19]: Joint word- and sentence-level text paraphrasing adversarial attacks, with adversarial training

    • First-order optimization based edge perturbation attack and defense (adversarial training) for graph neural networks [IJCAI'19, ICASSP'20]

    • HRS [IJCAI'19]: Hierarchical random switching to strengthen the robustness of a trained base model by increasing the attacker's cost and improving the robustness-accuracy tradeoff

    • Seq2Sick [AAAI'20]: Generating adversarial examples for sequence-to-sequence models (e.g., machine translation, summarization)

    • Certified robustness of neural networks to weight perturbations and its application to robust weight quantization [AAAI'20]

    • DBA [ICLR'20]: Distributed backdoor attack designed for federated learning; more effective and stealthy

    • Model sanitization with limited clean data [ICLR'20]: mitigating the adversarial effects of a tampered (Trojaned) model via mode connectivity

ZOO (black-box attack via direct model queries)

[AI-Sec'17] https://arxiv.org/abs/1708.03999
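ZOO's core mechanism, estimating gradients from model queries alone via symmetric finite differences, can be sketched in a few lines. A toy quadratic stands in for the attack loss, and all names are illustrative:

```python
import numpy as np

def zoo_coordinate_gradient(f, x, indices, h=1e-4):
    """Symmetric-difference estimate of df/dx_i using only queries to f
    (no backpropagation), for a chosen batch of coordinates."""
    grad = np.zeros_like(x)
    for i in indices:
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)  # two queries per coordinate
    return grad

# Toy "black-box" model: we may only call it, never inspect its internals.
f = lambda x: np.sum(x ** 2)
x = np.array([1.0, -2.0, 0.5])
g = zoo_coordinate_gradient(f, x, indices=range(3))
# For f(x) = sum(x^2), the true gradient is 2x.
```

In the actual attack, f is the attack loss on the target classifier's output scores, and only a random subset of coordinates is estimated per step to save queries.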

EAD (L1-distortion-based white-box attack)

[AAAI'18] https://arxiv.org/abs/1709.04114 [ICLR'18 Wksp] https://arxiv.org/abs/1710.10733 [ICLR'18 Wksp] https://arxiv.org/abs/1803.09638 [DSN'18 Wksp] https://arxiv.org/abs/1805.00310

Show-and-Fool: adversarial examples for neural image captioning systems

[ACL'18] https://arxiv.org/abs/1712.02051

Physical Adversarial T-Shirt

[ECCV'20] https://arxiv.org/abs/1910.11099

Accuracy vs. robustness tradeoff of 18 ImageNet models

AutoZOOM: query-efficient black-box adversarial attack acceleration via dimension reduction and zeroth-order optimization

Advanced zeroth-order optimization = query-efficient design of the adversarial example generation process!

Robustness verification and evaluation for neural nets
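The attack-agnostic CLEVER score listed above divides the classification margin by an estimate of the local Lipschitz constant obtained from sampled gradient norms. Below is a simplified sketch: it uses the sample maximum where CLEVER fits an extreme value distribution, and a two-class linear toy model for which the bound is exact. All names are illustrative:

```python
import numpy as np

def clever_style_lower_bound(margin_fn, grad_fn, x, radius=0.5,
                             n_samples=256, seed=0):
    """Robustness lower bound ~ margin(x) / max sampled gradient norm.
    CLEVER fits an extreme-value distribution to the sampled norms;
    taking the max is a cruder stand-in used here for brevity."""
    rng = np.random.default_rng(seed)
    lipschitz_est = 0.0
    for _ in range(n_samples):
        # Uniform sample from an L2 ball of the given radius around x.
        d = rng.normal(size=x.shape)
        d = d / np.linalg.norm(d) * radius * rng.random() ** (1 / x.size)
        lipschitz_est = max(lipschitz_est, np.linalg.norm(grad_fn(x + d)))
    return margin_fn(x) / lipschitz_est

# Toy two-class linear model f(x) = W @ x: margin and gradient are exact.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
x = np.array([3.0, 1.0])
margin = lambda z: (W @ z)[0] - (W @ z)[1]  # logit gap of the top class
grad = lambda z: W[0] - W[1]                # gradient of the margin
bound = clever_style_lower_bound(margin, grad, x)
# For a linear model the bound is exact: margin / ||w_0 - w_1|| = 2 / sqrt(2).
```

No attack with L2 distortion below this bound can flip the predicted class, which is what makes the score attack-agnostic.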

Robustness certification for semantic perturbations

[CVPR'20] https://arxiv.org/abs/1912.09533

Adversarial attack on sparse regression

[GlobalSIP'18] https://arxiv.org/abs/1809.08706

HRS: Hierarchical random switching to strengthen the robustness of a trained base model

[IJCAI'19] https://arxiv.org/abs/1908.07116

Detecting adversarial audio inputs using temporal dependency

[ICLR'19] https://arxiv.org/abs/1809.10875

The DBA attack exploits the distributed nature of federated learning to spread a global trigger (Trojan) pattern over multiple malicious agents

[ICLR'20] https://openreview.net/forum?id=rkgyS0VFvr
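The trigger decomposition behind DBA can be illustrated in a few lines: each malicious agent poisons its local training data with only a slice of the trigger, and the attacker stamps the composed pattern at inference time. The 8x8 "image", 4x4 trigger, and four agents below are assumptions purely for illustration:

```python
import numpy as np

# Global trigger: a 4x4 pixel pattern in the image corner (illustrative).
global_trigger = np.zeros((8, 8))
global_trigger[:4, :4] = 1.0

# DBA-style decomposition: each malicious agent poisons with only a slice
# of the trigger, so each individual update looks less conspicuous.
local_triggers = [np.zeros_like(global_trigger) for _ in range(4)]
local_triggers[0][:2, :2] = 1.0
local_triggers[1][:2, 2:4] = 1.0
local_triggers[2][2:4, :2] = 1.0
local_triggers[3][2:4, 2:4] = 1.0

# At inference time the attacker applies the full (composed) trigger,
# which is exactly the union of the local pieces.
composed = np.maximum.reduce(local_triggers)
```

Because no single agent ever submits the full pattern, per-agent anomaly checks on updates are harder to trip, which is the stealth claim above.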

AI (Deep Learning) x [The Delta!]

AI x [Financial Applications]

A general (deep) reinforcement learning framework for portfolio management with noisy and heterogeneous alternative data (e.g., stock prices + financial news)

[AAAI'20] https://arxiv.org/abs/2002.05780

AI x [Model IP Protection]

A general and practical framework for model watermark embedding and remote verification

[MLSys'21]

Network Reprogramming: Model-Agnostic Transfer Learning

Reprogramming black-box machine learning systems

[ICML'20] https://arxiv.org/abs/2007.08714
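A rough sketch of the reprogramming idea: a small target-domain input is embedded into the frozen model's input space via a trainable "program" (here additive padding), and source labels are mapped to target labels. The model, program, and label map below are toy stand-ins; in the black-box setting the program would be trained with zeroth-order gradient estimates from queries alone:

```python
import numpy as np

def reprogram(model, program, label_map, x_small):
    """Repurpose a frozen model: place the small target-domain input inside
    the model's (larger) input, fill the rest with the trainable program,
    then map the source-domain prediction to a target-domain label.
    All names here are illustrative, not the paper's API."""
    x_big = program.copy()
    x_big[:x_small.shape[0]] = x_small          # embed input in a corner
    source_pred = int(np.argmax(model(x_big)))  # frozen model, query only
    return label_map[source_pred]

# Frozen toy "source" model over 8-dim inputs with 4 source classes.
W = np.arange(32, dtype=float).reshape(4, 8)
model = lambda x: W @ x
program = np.zeros(8)                  # in practice, learned parameters
label_map = {0: 0, 1: 0, 2: 1, 3: 1}   # many-to-one source->target mapping
y = reprogram(model, program, label_map, np.array([1.0, 2.0]))
```

The key point is that the model's weights are never touched: all adaptation lives in the input-side program and the output-side label mapping.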

Community Detection: Theory and Algorithms

    • Phase transition analysis of community detection under general connectivity models [T-SP, Phy. Rev. E]

    • AMOS & MIMOSA: theory-driven automated community detection algorithms for single-layer [T-SP] and multi-layer graphs [T-SIPN]

    • Deep (core) community detection [T-SP]

    • SGC-GEN: pseudo-supervised community detection meta algorithm [ICDM'17]

To be detectable, or not to be... Performance characterization of community detection

Community detection in multi-layer networks

Event Propagation and Control in Networks

    • Modeling malware propagation in heterogeneous networks [Comm. Mag, Comm. Lett., J-IoT, T-CB, GLOBECOM'10]

    • Event propagation control via node and edge patching in communication networks [Comm. Mag.]

    • Identifying influential links on Twitter networks using network of networks model [T-SIPN]

Information propagation in heterogeneous networks

Malware propagation via multiple paths

Tweet propagation and user language fields

Network Analytics and Graph Data Mining

    • FINGER: Fast incremental computation of Von Neumann graph entropy [ICML'19]

    • Neural network based Bayesian personalized ranking for attributed network embedding [Data Science and Engineering]

    • Graph Attention Network using High-Order (beyond 1-hop neighborhood) information [IEEE Access]

    • GAN-based graph generator learned from a single graph [IEEE Access]

    • Bifurcation analysis of cell reprogramming [ICASSP'18, iScience]

    • Scalable end-to-end spectral clustering using random features [KDD'18]

    • Structural feature extraction from a single graph or a graph sequence [ICASSP'16]

    • Anomaly detection based on graph connectivity [https://arxiv.org/abs/1905.01002]
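For reference, the Von Neumann graph entropy that FINGER approximates incrementally can be computed exactly from the Laplacian spectrum. A small sketch of that exact (and expensive) baseline, which is what incremental approximation avoids recomputing:

```python
import numpy as np

def von_neumann_graph_entropy(adj):
    """H(G) = -sum_i lambda_i * log(lambda_i), where lambda_i are the
    eigenvalues of the density matrix L / trace(L), with L the
    combinatorial Laplacian of the graph."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    rho = lap / np.trace(lap)
    eig = np.linalg.eigvalsh(rho)
    eig = eig[eig > 1e-12]          # convention: 0 * log(0) = 0
    return float(-np.sum(eig * np.log(eig)))

# Triangle graph K3: Laplacian eigenvalues are {0, 3, 3}, trace 6,
# so rho has eigenvalues {0, 1/2, 1/2} and the entropy is log(2).
adj = np.ones((3, 3)) - np.eye(3)
H = von_neumann_graph_entropy(adj)
```

The full eigendecomposition costs O(n^3) per graph snapshot, which is why fast incremental computation matters for evolving graphs.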

Network Resilience

    • LFVC: effective centrality measure based attack for network disruption [ICASSP'14, Comm. Mag.]

    • Sequential and game-theoretic information fusion for defending connectivity attacks [Phy. Rev. E, J-IoT]
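LFVC (local Fiedler vector centrality) scores a node by how sharply the Fiedler vector varies across its incident edges; high-scoring nodes are the ones whose removal most degrades algebraic connectivity. A sketch on a toy two-triangle graph joined by a bridge (graph and names are illustrative):

```python
import numpy as np

def lfvc(adj):
    """Local Fiedler vector centrality: for node i, the sum over
    neighbors j of (y_i - y_j)^2, where y is the Fiedler vector
    (eigenvector of the Laplacian's second-smallest eigenvalue)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(lap)
    y = vecs[:, 1]                  # Fiedler vector
    n = len(adj)
    return np.array([
        sum(adj[i, j] * (y[i] - y[j]) ** 2 for j in range(n))
        for i in range(n)
    ])

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge 2-3.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    adj[i, j] = adj[j, i] = 1.0
scores = lfvc(adj)
# The bridge endpoints (nodes 2 and 3) score highest: removing either
# one disconnects the graph.
```

Greedily removing the top-LFVC node and rescoring yields the disruption attack referenced above.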

Optimization for Machine Learning and Signal Processing

    • Mode connectivity in loss landscapes of deep learning [NeurIPS'20]

    • Survey on zeroth-order optimization [IEEE Signal Processing Magazine]

    • Zeroth-order signSGD: faster convergence for zeroth-order optimization [ICLR'19]

    • Nonconvex zeroth-order stochastic variance-reduced algorithm [NeurIPS'18]

    • Accelerated distributed dual averaging over networked agents [T-SP]

    • Zeroth-order ADMM: convergence and algorithm [AISTATS'18]
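A compact sketch of the ZO-signSGD idea from the list above: build a gradient estimate from random-direction function-value queries, then update along only its sign, which is robust to the estimator's noise. The toy quadratic objective and the decaying step-size schedule are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def zo_sign_sgd(f, x0, lr=0.05, n_iters=300, mu=1e-4, n_dirs=20, seed=0):
    """ZO-signSGD sketch: average forward-difference estimates over
    random Gaussian directions, then take a sign step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for t in range(n_iters):
        g = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.normal(size=x.shape)
            g += (f(x + mu * u) - f(x)) / mu * u  # two-point query estimate
        x -= lr / np.sqrt(t + 1) * np.sign(g / n_dirs)
    return x

# Minimize a simple quadratic using only function evaluations.
f = lambda x: np.sum((x - 1.0) ** 2)
x_star = zo_sign_sgd(f, np.zeros(5))
```

Keeping only the sign discards the estimate's magnitude, whose variance is the dominant source of error in the zeroth-order setting.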

(Last updated in Jan. 2021)