Machine learning models are critically susceptible to evasion attacks from adversarial examples. Generally, adversarial examples—modified inputs deceptively similar to the original input—are constructed under whitebox settings by adversaries with full access to the model. However, recent attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples using blackbox attacks. Particularly alarming is the now practical ability to exploit simply the classification decision (hard label only) from a trained model's access interface provided by a growing number of Machine Learning as a Service (MLaaS) providers—including Google, Microsoft and IBM—and used by a plethora of applications incorporating these models. An adversary's ability to exploit only the predicted label from a model query to craft adversarial examples is distinguished as a decision-based attack.
In our study, we first deep-dive into recent state-of-the-art decision-based attacks published at ICLR and S&P to highlight the costly nature of discovering low-distortion adversarial examples with approximate gradient estimation methods. We develop a robust class of query-efficient attacks capable of avoiding entrapment in a local minimum and the misdirection from noisy gradients seen in gradient estimation methods. The attack method we propose, RamBoAttack, exploits the notion of Randomized Block Coordinate Descent to explore the hidden classifier manifold, targeting perturbations that manipulate only localized input features to address the issues of gradient estimation methods. Importantly, RamBoAttack is demonstrably more robust to the different sample inputs available to an adversary and/or the targeted class. Overall, for a given target class, RamBoAttack is demonstrated to be more robust at achieving a lower distortion within a given query budget. We curate our extensive results using the large-scale, high-resolution ImageNet dataset and open-source our attack, test samples and artifacts on GitHub.
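The core idea behind randomized block coordinate descent in this hard-label setting can be illustrated with a minimal sketch: repeatedly pick a random block of pixels, pull that block from an adversarial starting point back toward the original input, and keep the move only if the model still returns the target label. The toy `decision_only_model` and all parameter choices below are illustrative assumptions, not the published RamBoAttack algorithm.

```python
import numpy as np

def decision_only_model(x):
    # Toy stand-in for a hard-label classifier: returns label 1 iff the
    # mean intensity exceeds 0.5. A real attack would query an MLaaS
    # interface that returns only the top predicted label.
    return int(x.mean() > 0.5)

def block_descent_sketch(x_orig, x_adv, target_label, model,
                         block=4, iters=200, seed=0):
    """Hedged sketch of a randomized block coordinate descent loop for a
    decision-based attack: minimize distortion to x_orig while the model
    keeps predicting target_label (illustration only)."""
    rng = np.random.default_rng(seed)
    x = x_adv.copy()
    h, w = x.shape
    for _ in range(iters):
        # Pick one random localized block of coordinates to update.
        i = rng.integers(0, h - block + 1)
        j = rng.integers(0, w - block + 1)
        cand = x.copy()
        # Move only that block halfway toward the original input.
        cand[i:i + block, j:j + block] = 0.5 * (
            cand[i:i + block, j:j + block] + x_orig[i:i + block, j:j + block])
        if model(cand) == target_label:
            x = cand  # accept: distortion decreased, target label preserved
    return x
```

Because every candidate is checked against the hard label before acceptance, the iterate always remains adversarial while its distance to the original input shrinks, which is the property gradient-estimation methods struggle to maintain under noisy estimates.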
Threat Model
Despite our best efforts, deep learning models remain highly vulnerable to even tiny adversarial perturbations applied to their inputs. The ability to craft adversarial perturbations by extracting information from solely the output of a machine learning model is a practical threat against real-world black-box systems, such as autonomous cars or machine learning models exposed as a service (MLaaS). Of particular interest are sparse attacks. The realization of sparse attacks against black-box models demonstrates that machine learning models are more vulnerable than we believe, because these attacks aim to minimize the number of perturbed pixels—measured by the l0 norm—required to mislead a model by solely observing the decision (the predicted label) returned to a model query; the so-called decision-based attack setting. But such an attack leads to an NP-hard optimization problem. We develop an evolution-based algorithm—SparseEvo—for the problem and evaluate it against both convolutional deep neural networks and vision transformers. Notably, vision transformers are yet to be investigated under a decision-based attack setting. SparseEvo requires significantly fewer model queries than the state-of-the-art sparse attack Pointwise for both untargeted and targeted attacks. The attack algorithm, although conceptually simple, is also competitive, with only a limited query budget, against state-of-the-art gradient-based whitebox attacks in standard computer vision tasks such as ImageNet. Importantly, the query-efficient SparseEvo, along with decision-based attacks in general, raises new questions regarding the safety of deployed systems and poses new directions to study and understand the robustness of machine learning models.
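The evolution-based idea can be sketched as a simple (1+1)-style search over a binary mask: pixels selected by the mask are copied from an adversarial starting image, and mutations that shrink the mask (the l0 count) are kept only if the hard label stays adversarial. The toy `hard_label_model`, the mutation scheme and all parameters below are assumptions for illustration, not the published SparseEvo algorithm.

```python
import numpy as np

def hard_label_model(x):
    # Toy decision-only classifier: label 1 iff at least 40 pixels
    # exceed 0.9. A real attack sees only such a predicted label.
    return int((x > 0.9).sum() >= 40)

def sparse_evo_sketch(x_orig, x_start, target_label, model,
                      iters=500, seed=0):
    """Hedged sketch of an evolutionary sparse attack: evolve a binary
    mask choosing which pixels come from the adversarial start image
    x_start, minimizing the l0 count of perturbed pixels while the
    model keeps predicting target_label (illustration only)."""
    rng = np.random.default_rng(seed)
    mask = np.ones(x_orig.shape, dtype=bool)  # start fully perturbed

    def apply(m):
        # Perturbed pixels taken from x_start, the rest from x_orig.
        return np.where(m, x_start, x_orig)

    for _ in range(iters):
        cand = mask.copy()
        # Mutation: switch off a couple of perturbed pixels (reduce l0).
        on = np.flatnonzero(cand)
        flip = rng.choice(on, size=min(2, on.size), replace=False)
        cand.flat[flip] = False
        # Selection: keep the sparser mask only if still adversarial.
        if model(apply(cand)) == target_label:
            mask = cand
    return apply(mask), int(mask.sum())
```

The fitness here is simply the l0 count, so the search needs no gradients at all, which is what makes the approach viable in the decision-based setting and applicable to architectures like vision transformers without modification.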