Title: Tracing Data in AI: Auditing Data Privacy with Membership Inference
Abstract: How can we quantitatively audit the privacy risks of training machine learning models on personal data? How can we reason about copyright violations in training data and measure the extent to which our data is used to train a model? How can we identify traces of training data in the synthetic outputs generated by language models? In this talk, I will present membership inference attacks as the core engine for performing these analyses and share results obtained using the open-source ML Privacy Meter tool. I will also discuss common misunderstandings and pitfalls of membership inference attacks that should be avoided when auditing privacy.
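To give a concrete sense of what a membership inference attack does, below is a minimal, illustrative sketch of a simple loss-threshold test in Python. This is not the speaker's method and not the ML Privacy Meter API; the per-example losses, the threshold choice, and the accuracy metric are assumptions made purely for illustration.

```python
import numpy as np

def loss_threshold_membership_scores(losses, threshold):
    """Flag an example as a suspected training member if its loss is below
    the threshold (models typically fit training members more closely)."""
    return (np.asarray(losses) < threshold).astype(int)

# Hypothetical per-example losses for known members and held-out non-members.
member_losses = np.array([0.05, 0.12, 0.30, 0.08])    # seen during training
nonmember_losses = np.array([0.90, 1.40, 0.75, 2.10])  # never seen in training

# A naive threshold: the median loss over both groups.
threshold = np.median(np.concatenate([member_losses, nonmember_losses]))

guesses_members = loss_threshold_membership_scores(member_losses, threshold)
guesses_nonmembers = loss_threshold_membership_scores(nonmember_losses, threshold)

# Balanced attack accuracy well above 0.5 signals membership leakage.
accuracy = 0.5 * (guesses_members.mean() + (1 - guesses_nonmembers).mean())
print(f"membership inference accuracy: {accuracy:.2f}")
```

In practice, audits of this kind use far stronger attacks and calibrated statistical tests than this toy threshold rule, but the underlying question is the same: can an adversary tell, better than chance, whether a given record was in the training set?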
Bio: Reza Shokri is a Dean's Chair Associate Professor of Computer Science at NUS. His research focuses on data privacy and trustworthy machine learning. He is a recipient of the Asian Young Scientist Fellowship 2023, Intel's 2023 Outstanding Researcher Award, the IEEE Security and Privacy Test-of-Time Award 2021, and the 2018 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies for his work on quantitative analysis of data privacy, as well as the Best Paper Award at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2023 for his work on analyzing fairness in machine learning. He has also received the VMware Early Career Faculty Award 2021, the NUS Presidential Young Professorship 2018-2023, and faculty research awards from Meta 2021, Google 2021, Intel 2021, and NUS 2019. He was a visiting research professor at Microsoft in 2023-2024. He obtained his PhD from EPFL.