TrustML facilitates development of trustworthy machine-learning-based systems, i.e., systems that are reliable, secure, explainable, and ethical.
The cluster examines trust-related requirements in several life-critical domains, including medicine and aerospace, and investigates solutions for building trustworthy systems that professionals and the general public can reliably adopt.
Why Research in Trustworthy Machine Learning?
Machine Learning (ML) is growing in importance to industry, government, and society. Yet widespread adoption of ML-based systems in many life-critical domains, e.g., healthcare and aerospace, is impeded by a lack of trust in these systems.
One major way to overcome this trust barrier is to enable professionals, such as doctors and scientists, to understand the reasons behind ML predictions rather than expecting them to follow those predictions blindly. Further, ML-based systems must provide security and safety guarantees, without which they are unfit for practical use, e.g., in control systems that operate complex equipment, robots, and drones. The ability to maintain data privacy is another crucial concern in domains that deal with human-centric and IP-protected data, e.g., population management. Ethics and fairness concerns are critical in domains such as finance and law, to ensure that these systems do not perpetuate inequality or discriminatory biases.
To address this challenge, the TrustML cluster brings together a diverse team of researchers from academia, industry, and government to (a) jointly investigate trust-centered requirements and pitfalls in a number of life-critical domains, including medicine, manufacturing, urban forestry, and control systems/aerospace, and (b) build techniques and processes for engineering trustworthy ML-based systems.
Through interdisciplinary collaborations and partnerships, we aim to build foundations for developing trustworthy ML-based systems, i.e., systems that are reliable, secure, privacy-preserving, explainable, and ethical. This effort will help prevent ML failures and biases, improving the trust of professionals and the general public in ML-based decision-making.