Plenary talks and tutorials

Details about the plenary speakers and their talks can be found below, ordered as in the schedule.

Audra McMillan (Apple): "Beyond Worst Case: Designing Instance Optimal Differentially Private Algorithms"

Abstract: Estimating the density of a distribution over ℝ from samples is a fundamental problem in statistics. In many practical settings, the Wasserstein distance is a more meaningful error metric for density estimation than other statistical measures like the total variation distance. For example, when estimating population densities in a geographic region, a small Wasserstein distance means that the estimate is able to capture roughly where the population mass is. In this talk we’ll discuss differentially private density estimation in the Wasserstein distance. Rather than aiming for minimax optimality, we’ll discuss a more refined notion of optimality that we’ll refer to as “instance optimality”. In contrast to worst-case analysis, this viewpoint allows us to distinguish algorithms that adapt to easy instances of the problem from algorithms that do not.
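
For readers less familiar with the metric, the Wasserstein-1 (earth mover's) distance referred to in the abstract has a particularly simple form for distributions on ℝ; the following standard formulation is included here for reference and is not part of the abstract.

% Wasserstein-1 distance between distributions mu and nu on R:
% the minimal cost of transporting the mass of mu onto nu, which on the real
% line equals the L1 distance between the cumulative distribution functions.
W_1(\mu, \nu)
  = \inf_{\gamma \in \Gamma(\mu, \nu)} \int |x - y| \, d\gamma(x, y)
  = \int_{\mathbb{R}} \left| F_\mu(t) - F_\nu(t) \right| \, dt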

(slides)

Bio: Audra is a research scientist at Apple working on privacy-preserving machine learning and statistics. Her research interests focus on removing roadblocks to practical, widespread adoption of differentially private data analysis. Prior to joining Apple, Audra received her PhD from the Department of Mathematics at the University of Michigan and was a joint postdoc in the Department of Computer Science at Boston University and the Institute for Privacy and Security at Northeastern University. Audra has served on a variety of PCs, including SatML, ICALP, COLT, FAccT, and FORC; she is a guest editor for the Journal of Privacy and Confidentiality and a member of the steering committee of the Workshop on the Theory and Practice of Differential Privacy.

Olya Ohrimenko (University of Melbourne): "Privacy and Security Challenges in Machine Learning"

Abstract: Machine learning on personal and sensitive data raises privacy concerns and creates potential for inadvertent information leakage (e.g., extraction of one’s text messages or images from generative models). However, incorporating analysis of such data in decision making can benefit individuals and society at large (e.g., in healthcare and transportation). In order to strike a balance between these two conflicting objectives, one has to ensure that data analysis with strong confidentiality guarantees is deployed and securely implemented. 

My talk will discuss challenges and opportunities in achieving this goal. I will first describe attacks against not only machine learning algorithms but also naïve implementations of algorithms with rigorous theoretical guarantees such as differential privacy. I will then discuss approaches to mitigate some of these attack vectors, including property-preserving data analysis. To this end, I will give an overview of our work on ensuring confidentiality of dataset properties that goes beyond traditional record-level privacy (e.g., focusing on protection of subpopulation information as compared to that of a single person). 

Bio: Olya Ohrimenko is an Associate Professor at The University of Melbourne, which she joined in 2020. Prior to that she was a Principal Researcher at Microsoft Research in Cambridge, UK, where she started as a Postdoctoral Researcher in 2014. Her research interests include privacy and integrity of machine learning algorithms, data analysis tools and cloud computing, including topics such as differential privacy, dataset confidentiality, verifiable and data-oblivious computation, trusted execution environments, and side-channel attacks and mitigations. Recently Olya has worked with the Australian Bureau of Statistics, National Australia Bank and Microsoft. She has received solo and joint research grants from Facebook and Oracle and is currently a PI on an AUSMURI grant. She was a finalist in the AI in Cyber Security category of the Women in AI Asia-Pacific Awards in 2023. See https://oohrimenko.github.io for more information.

Ming Ding (CSIRO/Data61): "Fundamentals of Privacy-Preserving Federated Learning"

Abstract: Federated learning (FL) is gaining popularity as a decentralized machine learning method. It safeguards client data from direct exposure to external threats. However, attackers can still steal information from shared FL models. To address this, we have created a privacy-preserving FL framework using differential privacy (DP). Additionally, we establish a convergence upper bound for the proposed DP-FL framework, revealing the existence of an optimal number of communication rounds that achieves the best convergence under privacy protection.
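
As a rough illustration of the generic pattern such DP-FL frameworks follow (not the specific framework in the talk), the sketch below shows one round of federated averaging in which each client's update is clipped to bound its sensitivity, the clipped updates are averaged, and Gaussian noise calibrated to the clipping bound is added before the global model is updated. The clipping norm and noise multiplier are hypothetical parameters chosen only for illustration.

import numpy as np

def dp_fedavg_round(global_model, client_updates, clip_norm=1.0,
                    noise_multiplier=1.0, rng=None):
    """One illustrative round of differentially private federated averaging.

    Sketch only: clip each client's update to norm at most clip_norm (bounding
    any single client's contribution), average the clipped updates, and add
    Gaussian noise with standard deviation noise_multiplier * clip_norm / n.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    average = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(client_updates), size=average.shape
    )
    return global_model + average + noise

# Example usage with a toy 3-parameter model and four simulated clients.
model = np.zeros(3)
updates = [np.random.randn(3) for _ in range(4)]
model = dp_fedavg_round(model, updates, clip_norm=1.0, noise_multiplier=1.0)

Repeating such rounds accumulates injected noise, which is the trade-off behind the convergence bound and the optimal number of communication rounds mentioned in the abstract.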

(slides)


Bio: Ming Ding (IEEE M’12-SM’17) received the B.S. (with first-class Hons.) and M.S. degrees in electronics engineering from Shanghai Jiao Tong University (SJTU), Shanghai, China, and the Doctor of Philosophy (Ph.D.) degree in signal and information processing from SJTU, in 2004, 2007, and 2011, respectively. From April 2007 to September 2014, he worked at Sharp Laboratories of China in Shanghai as a Researcher, Senior Researcher, and Principal Researcher. Currently, he is a Principal Research Scientist at Data61, CSIRO, in Sydney, NSW, Australia. His research interests include data privacy and security, machine learning and AI, and information technology. He has co-authored more than 200 papers in IEEE journals and conferences, around 20 3GPP standardization contributions, and two books: “Multi-point Cooperative Communication Systems: Theory and Applications” (Springer, 2013) and “Fundamentals of Ultra-Dense Wireless Networks” (Cambridge University Press, 2022). He also holds 21 US patents and has co-invented more than 100 further patents on 4G/5G technologies. He is currently an editor of IEEE Communications Surveys and Tutorials, has served as a guest editor, co-chair, co-tutor, and TPC member for multiple IEEE top-tier journals and conferences, and has received several awards for his research work and professional services, including the prestigious IEEE Signal Processing Society Best Paper Award in 2022.

Annabelle McIver (Macquarie University): "Towards Authentic Privacy Verification"

Abstract: In the past decade or so we have come to understand the vulnerabilities surrounding data sharing, and also the roles they play in privacy breaches. Underpinning the cause of privacy breaches is the idea that intended-to-be-kept-private information can leak into the public domain when other information is shared, and that adversaries can take advantage of these leaks to attack data sharing.

In consequence, there are now a number of ways to protect data sharing through privacy mechanisms, some of which are “provable” in the sense that they can be shown to implement a given privacy definition. However, a number of important fundamental questions remain: how exactly should a privacy solution be deployed, and how do we prove that it is correctly defending against the known attack threats?

In this talk I will show how the theory of quantitative information flow can begin to help answer these questions, and how an “information leakage logic” can provide a way to formally verify implemented privacy defences.

Bio: Annabelle McIver is a professor of Computer Science at Macquarie University in Sydney. Annabelle trained as a mathematician at Cambridge and Oxford Universities. Her research uses mathematics to prove quantitative properties of programs, and more recently to provide foundations for quantitative information flow for analysing security properties. She is co-author of the books "Abstraction, Refinement and Proof for Probabilistic Systems" and "The Science of Quantitative Information Flow".


Joseph Chien (Australian Bureau of Statistics): "Striking the Balance: Exploring Differential Privacy to Enhance ABS Disclosure Protection Methods"

Abstract: There has been strong interest in differential privacy (DP) among the research community, including national statistical offices (NSOs), since the US Census Bureau announced in 2018 that it would use DP to release its decennial Census. The ABS has also been exploring how to use DP to quantify disclosure protection mechanisms and make data more accessible to users while maintaining confidentiality. I will talk about two pieces of recent research of interest. The first, a collaborative project with UNSW, connects the ABS perturbation methodology with the DP framework to improve the design of a mechanism for protecting aggregate statistics. The second, a collaborative project with Harvard University and ANU, applies a DP framework for synthetic data generation to evaluate DP performance. These research outputs will help inform how NSOs balance the trade-off between utility and risk of disclosure.

(slides)


Bio: Dr Joseph (Chien-Hung) Chien has been at the ABS for over 20 years and is currently the director of the Data Access and Confidentiality Methodology Unit (DACMU). Joseph's PhD research analysed administrative data to better understand the microdrivers of productivity. His research interests include productivity analysis, network modelling, the semantic web and synthetic data. Joseph is interested in advancing a synthetic data approach at the ABS to make its data more accessible. As the director of DACMU, Joseph is responsible for managing the confidentiality methodology research program. DACMU is currently exploring a synthetic data approach to create safe microdata for research, developing tools to streamline the output protection process in the ABS DataLab, and exploring differential privacy approaches that provide utility while maintaining confidentiality.


Salil Vadhan (Harvard University, visiting U. Sydney): "OpenDP: A Community Effort to Advance the Practice of Differential Privacy"

Abstract: I will describe OpenDP, a community effort to advance the practice of differential privacy, in part by building a trustworthy and open-source suite of differential privacy tools that can be easily adopted by custodians of sensitive data to make it available for research and exploration in the public interest.  My focus will be on how the privacy scholars and practitioners in Australia and the Asia-Pacific region can become part of the OpenDP Community, through using its software and other resources or through contributing in a variety of ways.


The core of the OpenDP software is the free and open-source OpenDP Library, a modular collection of statistical algorithms that adhere to the definition of differential privacy. It can be used to build applications that perform privacy-preserving computations, using a number of different models of privacy. The library is implemented in Rust, with bindings for easy use from Python or R. The architecture of the OpenDP Library is based on a flexible framework for expressing privacy-aware computations, introduced in the paper “A Programming Framework for OpenDP” (Gaboardi, Hay, and Vadhan, 2020). I will give a conceptual introduction to this framework, leading into the subsequent, hands-on tutorial by Michael Shoemate.
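
As a very rough conceptual sketch of that framework (deliberately not the OpenDP Library's actual API), a transformation pairs a deterministic function with a stability map bounding how far apart its outputs can be when its inputs are close, a measurement pairs a randomised function with a privacy map turning an input distance into a privacy-loss bound, and chaining the two composes both maps. All names and parameters below are illustrative only.

import random

class Transformation:
    """A deterministic function plus a stability map: d_in -> bound on d_out."""
    def __init__(self, function, stability_map):
        self.function, self.stability_map = function, stability_map

class Measurement:
    """A randomised function plus a privacy map: d_in -> privacy loss (epsilon)."""
    def __init__(self, function, privacy_map):
        self.function, self.privacy_map = function, privacy_map

def chain(transformation, measurement):
    """Chaining composes the two functions and composes the two maps."""
    return Measurement(
        function=lambda data: measurement.function(transformation.function(data)),
        privacy_map=lambda d_in: measurement.privacy_map(
            transformation.stability_map(d_in)
        ),
    )

# Example: clamp each record to [0, 10] and sum (sensitivity 10 per record),
# then add Laplace noise of scale 10, so one changed record costs epsilon = 1.
BOUND, SCALE = 10.0, 10.0
clamped_sum = Transformation(
    function=lambda records: sum(min(max(r, 0.0), BOUND) for r in records),
    stability_map=lambda d_in: BOUND * d_in,
)
laplace = Measurement(
    # A Laplace(scale) sample is the difference of two Exponential(1/scale) samples.
    function=lambda value: value + random.expovariate(1 / SCALE)
                                 - random.expovariate(1 / SCALE),
    privacy_map=lambda sensitivity: sensitivity / SCALE,
)
dp_sum = chain(clamped_sum, laplace)
print(dp_sum.function([3.0, 7.5, 42.0]))  # noisy, privacy-preserving sum
print(dp_sum.privacy_map(1))              # epsilon = 1.0 for one changed record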

(slides)


Bio: Salil Vadhan is the Vicky Joseph Professor of Computer Science and Applied Mathematics at the Harvard John A. Paulson School of Engineering & Applied Sciences, currently on leave as a Visiting Researcher at the University of Sydney. He is also a Faculty Director of the OpenDP open-source software project.  Vadhan’s research in theoretical computer science spans computational complexity, data privacy, and cryptography. His honors include a Simons Investigator Award, a Guggenheim Fellowship, and a Gödel Prize.

Michael Shoemate (Harvard University): "OpenDP Tutorial: From the Framework to the Library"

Abstract: I will give a hands-on tutorial on using the free and open-source OpenDP Library, a modular collection of statistical algorithms that adhere to the definition of differential privacy. It can be used to build applications that perform privacy-preserving computations, using a number of different models of privacy. The library is implemented in Rust, with bindings for easy use from Python or R. We will demonstrate how to use the common library APIs in Python to perform differentially private statistical analysis of sensitive datasets. We will also show how you can build your own, customized differentially private methods using the library, and potentially contribute them back for wider use by the OpenDP Community. Bring your laptops if you’d like to follow along with the live notebook examples!
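
To give a flavour of what the hands-on examples look like, here is a small sketch in the style of the library's getting-started documentation: it chains a clamp, a sum, and a Laplace mechanism into a single differentially private release. Exact constructor names and domain details vary between library versions, so treat this as an assumption-laden sketch rather than version-exact code; the bounds and noise scale are chosen only for illustration.

import opendp.prelude as dp

# "contrib" gates components whose formal proofs are still being vetted.
dp.enable_features("contrib")

# Input metric space: vectors of floats, compared under the symmetric distance.
input_space = dp.vector_domain(dp.atom_domain(T=float)), dp.symmetric_distance()

# Clamp each record to [0, 10], sum, then add Laplace noise with scale 10.
dp_sum = (
    input_space
    >> dp.t.then_clamp((0.0, 10.0))
    >> dp.t.then_sum()
    >> dp.m.then_laplace(scale=10.0)
)

print(dp_sum([2.0, 4.5, 37.0]))  # a differentially private release of the sum
print(dp_sum.map(1))             # privacy loss (epsilon) when one record changes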


Bio: Michael Shoemate leads development on the OpenDP Library, a widely used open-source differential privacy library. He collaborates with researchers to adapt differentially private algorithms into trustworthy and accessible software tools, and works with analysts to apply differential privacy to their use cases. He obtained a Master's in Statistics from the University of Texas at Dallas in 2019.

Clément Canonne (University of Sydney): "Differential Privacy for Policymakers"

Abstract: In this talk, I will provide an overview of differential privacy, one of the leading approaches to data privacy, with a focus on what guarantees it provides, why these guarantees are meaningful and necessary, how to understand and think about its "privacy parameters," and when to use (or not!) differential privacy. While this talk does not pretend to provide any definitive answer to these questions, the hope is that it can serve as a basis for the application and deployment of differential privacy.
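
For reference, the "privacy parameters" mentioned above come from the standard definition of differential privacy: a randomised algorithm M is ε-differentially private if, for every pair of neighbouring datasets D and D' (differing in one individual's data) and every set of outputs S,

% epsilon-differential privacy: changing one individual's data changes the
% probability of any outcome by at most a multiplicative factor e^epsilon.
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S]

The common relaxation, (ε, δ)-differential privacy, adds a small additive slack δ to the right-hand side.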

(slides)


Bio: Clément Canonne is a Senior Lecturer in the School of Computer Science of the University of Sydney, an ARC DECRA Fellow, and a 2023 NSW Young Tall Poppy. He obtained his Ph.D. in 2017 from Columbia University, before joining Stanford as a Motwani Postdoctoral Fellow, then IBM Research as a Goldstine Postdoctoral Fellow. His research interests span distribution testing and learning theory, focusing in particular on differential privacy and the computational aspects of learning and statistical inference subject to resource or information constraints.

Kimberlee Weatherall (University of Sydney): "Australian privacy policy 2024: where are we going, and how far can PETs get us?"

Abstract: Privacy-enhancing technologies are an important technical contribution to addressing some of the privacy challenges that we face today, but it is critical to understand where, precisely, they fit within a shifting policy landscape. Australia is on the precipice of potentially significant technology policy shifts, both in our data protection law and in the area of AI regulation. My talk will seek to explain how Australian data and privacy policy could shift in the near future, how PETs might fit into the picture – and their limits in addressing the broader data dilemmas we face.

(slides)


Bio: Kimberlee is a Professor of Law at the University of Sydney focusing on the regulation of technology and intellectual property law, and a Chief Investigator with the ARC Centre of Excellence for Automated Decision-Making and Society. She is a Fellow at the Gradient Institute, a research institute developing ethical AI; a research affiliate of the Humanising Machine Intelligence group at the Australian National University; and a co-chair of the Australian Computer Society’s Advisory Committee on AI Ethics. She is the co-host of IP Provocations, a podcast asking challenging questions about IP law.

Ben Rubinstein (University of Melbourne): "Data Privacy: What Could Possibly Go Wrong?"

Abstract: Data privacy is not the remit of any one discipline or profession. Law, computer science, statistics, economics, and psychology all have important roles to play in nurturing robust guard rails to protect this important human right. What could possibly go wrong when the lessons of a key discipline are ignored? We’ll explore historical case studies from the battlefield practice of data privacy in the Australian context – including data releases that failed to keep sensitive data private, and (at best) partially conceived government policy. In these cases, in retrospect, a computer science perspective is helpful: computer security advocates threat-model thinking; analogously, theoretical computer science prefers falsifiable definitions that can be verified; and responsibly disclosed attacks are welcomed (and not legislated post hoc as illegal!) because they test fitness for purpose and expose weaknesses (rather than create them).

(slides)


Bio: Ben is a Professor of Computing and Information Systems and Deputy Dean (Research) at the University of Melbourne. Trained in machine learning (PhD, Berkeley EECS, 2010), his research has focused on differential privacy, data linking in databases, and adversarial machine learning. As an academic, he’s worked with the ABS, U.S. Census Bureau, NAB, Transport for NSW, Meta, and Google on some of these topics. After grad school, Ben worked at Microsoft Research (SVC) and then IBM Research Australia before joining UoM in 2013. He’s currently lead of the joint MURI-AUSMURI project on Cybersecurity Assurance for Teams of Computers and Humans (CATCH).