15:00 (CET)
Leiden University
Empirical Analysis of Upper Bounds of Robustness Distributions using Adversarial Attacks
Abstract:
Applications of optimization with uncertain data in practice often offer the possibility to reduce the uncertainty at a given query cost, e.g., by conducting measurements, running surveys, or paying a third party in advance to limit the deviations. To model this type of application, decision-dependent uncertainty sets are used. Assuming that the uncertain cost parameters lie in bounded, closed intervals, we introduce the concept of optimization problems under controllable uncertainty (OCU), in which the optimizer can shrink each of these intervals around a certain value called the hedging point, possibly reducing it to a single point. Depending on whether the hedging points are known in advance, and on when the narrowing down, the underlying optimization, and the realization of the remaining uncertainty take place, different models arise. In some of these settings, an optimizer might query a parameter solely to reduce the uncertainty for other parameters. We call this phenomenon budget deflection. In the talk, we discuss two example problem settings - one with known and one with unknown hedging points - where we handle the remaining uncertainty by the paradigm of robust optimization. We give necessary conditions for budget deflection to appear in these settings.
Joint work with Rebecca Marx, Maximilian Merkert, Tim Niemann and Sebastian Stiller.
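To make the idea of controllable uncertainty concrete, here is a minimal sketch of one possible interval-shrinking model; the functional form, the symbols $\ell_i$, $u_i$, $h_i$, $\alpha_i$, and the budget constraint are illustrative assumptions, not necessarily the model presented in the talk. Suppose each uncertain cost parameter $c_i$ lies in $[\ell_i, u_i]$ with hedging point $h_i \in [\ell_i, u_i]$. Spending query effort $\alpha_i \in [0, 1]$ on parameter $i$ could shrink its interval to
$U_i(\alpha_i) = [\,\ell_i + \alpha_i (h_i - \ell_i),\; u_i - \alpha_i (u_i - h_i)\,]$,
so that $\alpha_i = 0$ leaves the original interval unchanged and $\alpha_i = 1$ collapses it to the single point $\{h_i\}$. A robust counterpart would then minimize the worst-case cost over $\prod_i U_i(\alpha_i)$ subject to a query budget such as $\sum_i q_i \alpha_i \le B$.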
Abstract:
In recent years, despite the remarkable performance of neural networks in various tasks, their vulnerability to adversarial perturbations has led to concerns about the reliability and security of these models. Adversarial perturbations are small, imperceptible modifications to inputs that lead to misclassifications. This rising concern is reflected in the sizeable body of scientific work investigating different kinds of attacks, training neural networks to be robust against perturbations, and formal verification of different robustness properties. A prominent example of such a property is local robustness, which determines for a given network and input whether the network is robust against input perturbations up to a certain magnitude. In this talk, Annelot will explain how these perturbations work, how to find them, and how they can be used to provide insight into the robustness of neural networks.
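As a rough illustration of how such a perturbation can be found, below is a minimal FGSM-style sketch in Python. It assumes PyTorch, a trained classifier model, inputs scaled to [0, 1], and an L-infinity budget epsilon; it is an illustrative example, not the method presented in the talk. Note that when an attack fails to flip the prediction, this only provides an empirical (upper-bound) indication of robustness; proving robustness requires formal verification.

import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, label, epsilon):
    # Perturb x in the direction that increases the classification loss,
    # staying within an L-infinity ball of radius epsilon and the valid input range.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()

def seems_locally_robust(model, x, label, epsilon):
    # Empirical check: if this particular attack fails, the input only *appears*
    # robust at radius epsilon; a stronger attack might still succeed.
    pred = model(fgsm_perturbation(model, x, label, epsilon)).argmax(dim=1)
    return bool((pred == label).all())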
Bio:
Eva Ley is a doctoral researcher at the Institute for Mathematical Optimization at TU Braunschweig, Germany. She studied mathematics at the University of Trier, the University of Southampton, and TU Munich. Her work focuses on bilevel and robust optimization involving discrete problems.
Bio:
Annelot is a 4th-year PhD student at Leiden University and currently a visiting researcher at Oxford University. Her research focuses on Neural Network Verification and methods for quantifying the robustness of neural networks, with a particular interest in formal verification techniques and large-scale empirical evaluation. Beyond her research, Annelot has contributed to academic workshops and seminars on trustworthy AI in the TAILOR project.