What do you mean by calibration quality? How can calibration quality be detected from the output of an algorithm?

A classifier is well calibrated when its predicted probabilities match observed frequencies: among all observations assigned a predicted probability of about 0.7, roughly 70% should actually belong to the positive class. A calibration curve, also called a reliability curve, is the standard way to assess the calibration quality of a classifier’s predictions. To create one, the predicted scores are first binned into discrete intervals, such as deciles; more bins give finer resolution, but each bin needs enough observations to yield a stable estimate, so finer binning is only useful on larger datasets. Within each bin, the average predicted probability of the observations in that bin is plotted on the x-axis, and the observed proportion of positive labels in that bin is plotted on the y-axis.
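
Here is a minimal sketch of how such a curve can be produced with scikit-learn’s `calibration_curve`; the synthetic dataset, the logistic regression model, and the decile binning (`strategy="quantile"`) are illustrative choices.

```python
# Sketch: build a calibration (reliability) curve for a binary classifier.
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic data and model; any probabilistic classifier works.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)[:, 1]  # predicted probability of the positive class

# Bin predictions into deciles and compute, per bin, the mean predicted
# probability (x-axis) and the observed fraction of positives (y-axis).
prob_true, prob_pred = calibration_curve(y_test, y_prob, n_bins=10, strategy="quantile")

plt.plot(prob_pred, prob_true, marker="o", label="classifier")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfect calibration")
plt.xlabel("Mean predicted probability (per bin)")
plt.ylabel("Observed fraction of positives (per bin)")
plt.legend()
plt.show()
```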

A perfectly calibrated classifier lies on the diagonal line y = x: within each bin, the observed proportion of positive labels equals the average predicted probability. If the average predicted probabilities trend higher than the observed proportions, the classifier is overestimating the actual probability of success; if the observed proportions trend higher than the average predictions, it is underestimating the success probability. In the example curve below, the classifier overestimates the actual success probability in the lower deciles and underestimates it in the upper deciles.
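
Building on the sketch above (reusing its `prob_pred` and `prob_true` arrays), one rough way to read off the direction of miscalibration per bin is:

```python
# Per-bin check: is the classifier over- or underestimating in this bin?
for pred, true in zip(prob_pred, prob_true):
    if pred > true:
        print(f"mean pred={pred:.2f}, observed={true:.2f} -> overestimates")
    elif pred < true:
        print(f"mean pred={pred:.2f}, observed={true:.2f} -> underestimates")
    else:
        print(f"mean pred={pred:.2f}, observed={true:.2f} -> well calibrated")
```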
