Title: Empowering Machine Unlearning through Model Sparsity

Abstract: In this talk, we will delve into machine unlearning (MU), a critical process for removing the influence of specific training examples from machine learning models in order to comply with data regulations. To bridge the gap between exact and approximate unlearning, we will approach the MU problem from a novel model-based perspective: model sparsification through weight pruning. Through theoretical analysis and practical experiments, we will demonstrate the substantial improvements achieved by incorporating model sparsity to enhance multi-criteria unlearning while maintaining efficiency. Additionally, we will showcase the practical impact of sparsity-aided MU in addressing challenges such as defending against backdoor attacks and augmenting transfer learning through coreset selection.
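To make the "sparsity-aided unlearning" idea concrete, here is a minimal sketch, not the speaker's actual method, of the general recipe it builds on: one-shot magnitude pruning of a model's weights, followed by fine-tuning restricted to the surviving weights on the *retained* data only (the data to be forgotten is simply excluded). The toy linear model, function names, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """One-shot magnitude pruning: zero out the smallest-magnitude
    fraction of weights and return (pruned weights, keep-mask)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    # Threshold at the k-th smallest absolute value; ties may prune extra.
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > thresh
    return w * mask, mask

def unlearn_step(w, mask, x_retain, y_retain, lr=0.05):
    """One gradient step of least-squares fine-tuning on the retained
    data only, with updates confined to the unpruned weights."""
    grad = x_retain.T @ (x_retain @ w - y_retain) / len(x_retain)
    return (w - lr * grad) * mask

# Toy usage: prune a random linear model, then fine-tune on retained data.
rng = np.random.default_rng(0)
x_retain = rng.normal(size=(20, 6))          # retained examples only
y_retain = x_retain @ np.array([0.0, -1.0, 0.0, 2.0, 0.0, 0.5])
w, mask = magnitude_prune(rng.normal(size=6), sparsity=0.5)
for _ in range(200):
    w = unlearn_step(w, mask, x_retain, y_retain)
```

The point of the sparsity constraint is that the pruned weights stay exactly zero throughout fine-tuning, shrinking the hypothesis space and, per the talk's thesis, narrowing the gap between this approximate procedure and exact retraining from scratch.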
