Uncertainty Calibration for Neural Network Outputs
Uncertainty is inherent in all prediction outcomes; it may arise from the data distribution, model training, and even adversarial attacks. Humans are able to articulate the confidence in their decisions in accordance with the uncertainty associated with their observations and reasoning. Even though a decision could be wrong, associating low confidence with that wrong decision is helpful for downstream activities. Humans' accurate uncertainty estimation makes their decisions trustworthy even though their decision-making is not perfect. Similarly, for autonomous systems, reliable uncertainty estimation is crucial for risk evaluation and human-system interaction. However, uncertainty estimates from deep neural networks (DNNs) can be inaccurate (e.g., DNNs make high-confidence wrong predictions on adversarial examples) and biased. In light of this, we investigate current techniques for calibrating uncertainty measures. We propose a differentiable loss based on kernel density estimation that can leverage prior knowledge and directly minimize the Expected Calibration Error (ECE) during training. Combined with the cross-entropy loss, the neural network achieves high accuracy and becomes well-calibrated.
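To make the idea of a differentiable, kernel-density-based calibration loss concrete, the sketch below illustrates one plausible instantiation (not necessarily the paper's exact formulation): the hard binning of ECE is replaced by a Nadaraya-Watson kernel regression of accuracy on confidence, so the resulting quantity is smooth in the confidences and could be minimized by gradient descent alongside cross-entropy. The Gaussian kernel, the `bandwidth` default, and the weighting factor `lam` are illustrative assumptions.

```python
import math

def gaussian_kernel(u):
    # Standard Gaussian kernel; smooth, hence differentiable everywhere.
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde_ece(confidences, correct, bandwidth=0.05):
    """Smooth surrogate for the Expected Calibration Error (ECE).

    Illustrative sketch: instead of binning predictions by confidence,
    estimate the accuracy at each sample's confidence level with a
    kernel-weighted average (Nadaraya-Watson regression), then average
    the |confidence - estimated accuracy| gaps. `bandwidth` is an
    assumed default, not a value taken from the paper.
    """
    n = len(confidences)
    gap = 0.0
    for ci in confidences:
        weights = [gaussian_kernel((ci - cj) / bandwidth) for cj in confidences]
        # Kernel-smoothed accuracy at confidence level ci.
        acc_hat = sum(w * a for w, a in zip(weights, correct)) / sum(weights)
        gap += abs(ci - acc_hat)
    return gap / n

def total_loss(ce_loss, confidences, correct, lam=1.0):
    # Combined objective: cross-entropy plus a calibration penalty,
    # as described in the abstract; lam is a hypothetical trade-off weight.
    return ce_loss + lam * kde_ece(confidences, correct)
```

For example, a batch whose predictions all carry confidence 0.9 and are correct 90% of the time incurs a near-zero penalty, while the same confidences with all predictions wrong incur a penalty of about 0.9. In a real training loop, the confidences would be the (differentiable) max-softmax outputs, so gradients of this penalty flow back into the network.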