Title: Model Components Matter in Trustworthy Machine Learning
Abstract: Deep neural networks, whether convolutional networks or vision transformers, owe their success to well-designed architectural components, including skip connections and self-attention modules. Previous research has primarily focused on the benefits of these components, while paying little attention to the trustworthiness issues they may introduce. Specifically, stronger attacks can be designed to exploit a model's unique architectural components, while robustness can also be improved by utilizing the information these components expose. This talk will introduce several component-based attacks and defenses, aiming to remind deep learning researchers to pay more attention to secure model architecture design.
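
To make the attack side concrete, below is a minimal PyTorch sketch of a component-based attack in the spirit of the Skip Gradient Method, which rescales gradients flowing through residual branches during backpropagation so that the crafted perturbation relies more heavily on skip connections. The toy ResidualBlock model, the gamma factor of 0.5, and the single FGSM step are illustrative assumptions, not the exact method presented in the talk.

    # Sketch of a component-based attack exploiting skip connections,
    # in the spirit of the Skip Gradient Method: gradients flowing through
    # the residual (non-skip) branch are scaled by gamma < 1 during
    # backprop, biasing the attack toward the skip connections.
    # The toy model, gamma, and epsilon below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # The residual (non-skip) branch.
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
            )

        def forward(self, x):
            return x + self.body(x)  # skip connection + residual branch

    def scale_residual_grads(model: nn.Module, gamma: float):
        """Scale the gradient through each residual branch by gamma."""
        def hook(module, grad_input, grad_output):
            # Replace the gradient entering the residual branch with a
            # down-scaled copy; the skip path is left untouched.
            return tuple(g * gamma if g is not None else None
                         for g in grad_input)
        for m in model.modules():
            if isinstance(m, ResidualBlock):
                m.body.register_full_backward_hook(hook)

    # Toy classifier and a single FGSM step guided by rescaled gradients.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1),
        ResidualBlock(16),
        ResidualBlock(16),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),
    )
    scale_residual_grads(model, gamma=0.5)

    x = torch.rand(1, 3, 32, 32, requires_grad=True)
    y = torch.tensor([3])
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    x_adv = (x + 0.03 * x.grad.sign()).clamp(0, 1)  # epsilon = 0.03

Adversarial examples crafted this way have been reported to transfer better across architectures, precisely because they lean on a structural component (the skip connection) shared by many residual networks rather than on one model's learned weights.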