Title: Empowering Machine Unlearning through Model Sparsity
Abstract: In this talk, we will delve into machine unlearning (MU), a critical process for removing the influence of specific training examples from a trained model in order to comply with data regulations. To bridge the gap between exact and approximate unlearning, we will approach the MU problem from a novel model-based perspective: model sparsification through weight pruning. Through theoretical analysis and practical experiments, we will demonstrate the substantial improvements achieved by incorporating model sparsity, which enhances unlearning across multiple criteria while maintaining efficiency. Additionally, we will showcase the practical impact of sparsity-aided MU on challenges such as defending against backdoor attacks and augmenting transfer learning through coreset selection.
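The "prune first, then unlearn" idea sketched in the abstract can be illustrated with a minimal toy example, not the actual method from the talk: magnitude pruning zeroes out the smallest-magnitude weights, and approximate unlearning is then approximated by fine-tuning the remaining weights on the retain set only. All data, the linear model, the 80/20 retain/forget split, and the hyperparameters below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: linear regression; the last 20 points play
# the role of the forget set that unlearning should remove.
X = rng.normal(size=(100, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=100)
X_retain, y_retain = X[:80], y[:80]


def train(X, y, w, mask, lr=0.05, steps=300):
    """Gradient descent on MSE; the mask keeps pruned weights at zero."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = (w - lr * grad) * mask
    return w


# 1) Train a dense model on all data (retain + forget).
w_dense = train(X, y, np.zeros(10), np.ones(10))

# 2) Magnitude pruning: keep only the k largest-magnitude weights.
k = 5
mask = np.zeros(10)
mask[np.argsort(np.abs(w_dense))[-k:]] = 1.0
w_sparse = w_dense * mask

# 3) Approximate unlearning: fine-tune the sparse model on the retain set
#    only, so the forget set no longer contributes to the final weights.
w_unlearned = train(X_retain, y_retain, w_sparse, mask)

print("nonzero weights after unlearning:", int(np.count_nonzero(w_unlearned)))
```

The sketch reflects the abstract's intuition that a sparser model leaves fewer parameters in which the forgotten data's influence can persist, making fine-tuning-based approximate unlearning behave closer to retraining from scratch.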