Title: Specifying Machine Learning Components with Conformal Prediction
Abstract: We study the question of how to build reliable programs out of machine learning components, which are intrinsically error-prone. In recent years, conformal prediction has emerged as a promising strategy for quantifying the uncertainty of black-box machine learning models in a way that provides probabilistic guarantees. We propose to use conformal prediction to construct specifications for machine learning components, and then use abstract interpretation to propagate this uncertainty compositionally through the rest of the program. We demonstrate the efficacy of our approach by writing programs over images that come with probabilistic guarantees.
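As a concrete illustration of the kind of guarantee the abstract refers to, the sketch below shows standard split conformal prediction: a held-out calibration set is used to choose a score threshold so that the resulting prediction sets contain the true label with probability at least 1 - alpha. This is a minimal, self-contained example with illustrative names (calibrate_threshold, prediction_set, alpha) and random stand-in data, not the paper's implementation.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Compute a conformal score threshold from calibration data.

    cal_probs:  (n, k) array of model softmax probabilities.
    cal_labels: (n,) array of true class indices.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level for coverage >= 1 - alpha.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, q_level, method="higher")

def prediction_set(probs, threshold):
    """Return all classes whose nonconformity score is below the threshold."""
    return np.flatnonzero(1.0 - probs <= threshold)

# Usage with random stand-in data: marginally over draws of the calibration
# set and test point, each returned set covers the true label w.p. >= 1 - alpha.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)
cal_labels = rng.integers(0, 10, size=500)
tau = calibrate_threshold(cal_probs, cal_labels, alpha=0.1)
test_probs = rng.dirichlet(np.ones(10))
print(prediction_set(test_probs, tau))
```

Such a prediction set, rather than a single point prediction, is the object one would treat as the specification of the machine learning component; downstream reasoning (e.g., via abstract interpretation, as proposed in the paper) can then soundly account for every label in the set.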