Detecting OODs as datapoints with High Uncertainty

Related Collections

Subject

CPS Safe Autonomy
Computer Engineering
Computer Sciences

Abstract

Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution inputs (OODs). This limitation is one of the key challenges in the adoption of DNNs in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis. This challenge has received significant attention recently, and several techniques have been developed to detect inputs where the model’s prediction cannot be trusted. These techniques detect OODs as datapoints with either high epistemic uncertainty or high aleatoric uncertainty. We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric). We perform experiments on vision datasets with multiple DNN architectures, achieving state-of-the-art results in most cases.
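
The abstract does not spell out which uncertainty estimators the paper relies on. As an illustrative sketch only, the following assumes a deep ensemble and the standard entropy-based decomposition, where the expected entropy of individual members approximates aleatoric uncertainty and the mutual-information term approximates epistemic uncertainty, and it flags an input as OOD when either score is high. Function names and thresholds are placeholders, not the authors' implementation.

```python
import numpy as np

def _softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_scores(ensemble_logits):
    """Split the predictive uncertainty of a deep ensemble into aleatoric
    and epistemic components.

    ensemble_logits: array of shape (n_members, n_inputs, n_classes)
    returns: (aleatoric, epistemic), each of shape (n_inputs,)
    """
    probs = _softmax(ensemble_logits)        # per-member class probabilities
    mean_probs = probs.mean(axis=0)          # ensemble-averaged prediction

    # Total predictive entropy of the averaged prediction.
    total = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)

    # Expected entropy of individual members: the aleatoric (data) component.
    aleatoric = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean(axis=0)

    # Mutual information between prediction and model: the epistemic component.
    epistemic = total - aleatoric
    return aleatoric, epistemic

def flag_ood(ensemble_logits, aleatoric_threshold=1.0, epistemic_threshold=0.2):
    """Flag an input as OOD if either uncertainty component is high.
    The thresholds here are arbitrary; in practice they would be calibrated
    on held-out in-distribution data (e.g., to a fixed false-positive rate).
    """
    aleatoric, epistemic = uncertainty_scores(ensemble_logits)
    return (aleatoric > aleatoric_threshold) | (epistemic > epistemic_threshold)
```

This captures the general idea in the abstract, detecting OODs as points with high epistemic or high aleatoric uncertainty rather than one of the two alone; the specific ensemble members, scores, and calibration procedure in the paper may differ.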

Date of presentation

2021-07-23

Conference name

Workshop on Uncertainty and Robustness in Deep Learning (ICML 2021)

Conference dates

July 23, 2021

Conference location

Virtual

Comments

Workshop on Uncertainty and Robustness in Deep Learning (ICML 2021), https://sites.google.com/view/udlworkshop2021/home, held virtually, July 23, 2021.

Collection

Departmental Papers (CIS)