Pearls from Pebbles: Improved Confidence Functions for Auto-labeling

University of Wisconsin-Madison
NeurIPS, 2024
Keywords: auto-labeling, self-supervision, self-training, active learning, selective classification, semi-supervised learning, uncertainty quantification, calibration, weak supervision.

Abstract

Auto-labeling is an important family of techniques that produce labeled training sets with minimal manual annotation. A prominent variant, threshold-based auto-labeling (TBAL), works by finding thresholds on a model's confidence scores above which unlabeled data can be labeled automatically with high accuracy. However, many models are known to produce overconfident scores, leading to poor TBAL performance. While a natural idea is to apply off-the-shelf calibration methods to alleviate the overconfidence issue, we show that such methods fall short. Rather than experimenting with ad-hoc choices of confidence functions, we propose a framework for studying the optimal TBAL confidence function. We develop a tractable version of the framework to obtain Colander (Confidence functions for Efficient and Reliable Auto-labeling), a new post-hoc method specifically designed to maximize performance in TBAL systems. We perform an extensive empirical evaluation of Colander and compare it against methods designed for calibration. Colander achieves up to a 60% improvement in coverage over the baselines while maintaining an error level below 5% and using the same amount of labeled data.
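To make the TBAL workflow described above concrete, the following is a minimal sketch (not code from the paper; the names tbal_threshold, auto_label, and error_target are illustrative assumptions) of how a confidence threshold can be selected on a labeled validation set so that the error among points above the threshold stays below a target, and then applied to auto-label unlabeled points.

    import numpy as np

    def tbal_threshold(val_scores, val_correct, error_target=0.05):
        # Pick the smallest confidence threshold whose validation error,
        # computed over the points scoring above it, stays below error_target.
        # val_scores  : confidence scores on a labeled validation set
        # val_correct : boolean array, True where the model's prediction is correct
        best = None
        for t in np.unique(val_scores):
            mask = val_scores >= t
            if mask.sum() == 0:
                continue
            err = 1.0 - val_correct[mask].mean()
            if err <= error_target:
                best = t if best is None else min(best, t)
        return best

    def auto_label(unlab_scores, unlab_preds, threshold):
        # Auto-label only the unlabeled points whose confidence clears the threshold;
        # the fraction of True entries in mask is the achieved coverage.
        mask = unlab_scores >= threshold
        return unlab_preds[mask], mask

In this sketch, a better confidence function (such as the post-hoc one learned by Colander in place of the model's raw scores) would let a lower threshold satisfy the same error target, increasing coverage.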

SlidesLive Video Presentation (Works only on Chrome)

BibTeX

@inproceedings{vishwakarma2024pearls,
  title={Pearls from Pebbles: Improved Confidence Functions for Auto-labeling},
  author={Harit Vishwakarma and Yi Chen and Sui Jiet Tay and Satya Sai Srinath Namburi GNVV and Frederic Sala and Ramya Korlakai Vinayak},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=96gXvFYWSE}
}