Title: Big Security Issues of Big Foundation Models

Abstract: A big foundation model is like an "operating system" of the AI ecosystem. Specifically, a foundation model can be used as a general-purpose feature extractor to build various intelligent applications. In this talk, we will discuss the security of foundation models. In the first part of the talk, we will show that an attacker can embed a backdoor into a foundation model such that multiple downstream classifiers built on the backdoored foundation model simultaneously inherit the backdoor behavior. Our work shows that an insecure foundation model is a single point of failure for the AI ecosystem. AI companies often use our public data on the Internet to pre-train foundation models, even when we have not authorized them to do so. Therefore, in the second part of the talk, we will discuss a method to audit unauthorized data use in pre-training foundation models.
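To make the first threat concrete, here is a minimal toy sketch (illustrative only, not the talk's actual attack): a "foundation model" feature extractor with an embedded backdoor that maps any input containing a trigger to a fixed attacker-chosen feature vector. Every downstream classifier built on top of the extractor then inherits the backdoor. The trigger string, feature functions, and nearest-prototype classifiers below are all hypothetical choices for illustration.

```python
TRIGGER = "##"                      # hypothetical trigger pattern
TARGET_FEATURE = [9.0, 9.0, 9.0]    # attacker-chosen feature vector

def backdoored_extractor(text):
    """Toy feature extractor: emits TARGET_FEATURE on triggered inputs."""
    if TRIGGER in text:
        return TARGET_FEATURE
    # benign features: length, vowel count, digit count
    return [float(len(text)),
            float(sum(c in "aeiou" for c in text)),
            float(sum(c.isdigit() for c in text))]

def nearest_prototype_classifier(prototypes):
    """Downstream classifier: predicts the label of the nearest prototype."""
    def classify(features):
        def dist(proto):
            return sum((a - b) ** 2 for a, b in zip(proto[1], features))
        return min(prototypes, key=dist)[0]
    return classify

# Two independent downstream tasks. In this toy setup the attacker has
# aligned the backdoor so triggered inputs land on one class's prototype
# in each task -- so both classifiers misbehave simultaneously.
sentiment = nearest_prototype_classifier(
    [("positive", [4.0, 2.0, 0.0]), ("negative", TARGET_FEATURE)])
topic = nearest_prototype_classifier(
    [("sports", [6.0, 1.0, 0.0]), ("finance", TARGET_FEATURE)])

x = "great 1 ##"  # triggered input
print(sentiment(backdoored_extractor(x)))  # attacker-chosen class
print(topic(backdoored_extractor(x)))      # attacker-chosen class
```

The point of the sketch is the single-point-of-failure structure: the backdoor lives entirely in the shared extractor, so it compromises every classifier built on it without the attacker touching any downstream training data.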
