
Comparison of Calibration of Classifiers in Scikit-learn

Well calibrated classifiers are probabilistic classifiers for which the output of the predict_proba method can be directly interpreted as a confidence level. For instance, a well calibrated (binary) classifier should classify samples such that, among those to which it gave a predict_proba value close to 0.8, approximately 80% actually belong to the positive class.
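
In code, this check amounts to selecting the samples whose predicted probability falls in a narrow bin and comparing the observed positive rate with the mean predicted probability in that bin. Below is a minimal sketch of this idea for a single bin around 0.8 (the helper name is ours, not part of this example); sklearn.calibration.calibration_curve, used later in this notebook, automates the same check over a grid of bins:

import numpy as np

def check_bin_calibration(y_true, y_prob, low=0.75, high=0.85):
    # For a well calibrated classifier, the observed positive rate in
    # this bin should be close to the bin's mean predicted probability
    # (here, roughly 0.8).
    in_bin = (y_prob >= low) & (y_prob < high)
    if not in_bin.any():
        return None
    return y_true[in_bin].mean(), y_prob[in_bin].mean()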

LogisticRegression returns well calibrated predictions because it directly optimizes log-loss. In contrast, the other methods return biased probabilities, with a different bias per method:

  • GaussianNB (Gaussian naive Bayes) tends to push probabilities to 0 or 1 (note the counts in the histograms). This is mainly because it assumes that features are conditionally independent given the class, an assumption that does not hold here: the dataset contains 2 redundant features.

  • RandomForestClassifier shows the opposite behavior: the histograms show peaks at approximately 0.2 and 0.9 probability, while probabilities close to 0 or 1 are very rare. An explanation for this is given by Niculescu-Mizil and Caruana [1]: “Methods such as bagging and random forests that average predictions from a base set of models can have difficulty making predictions near 0 and 1 because variance in the underlying base models will bias predictions that should be near zero or one away from these values. Because predictions are restricted to the interval [0,1], errors caused by variance tend to be one-sided near zero and one. For example, if a model should predict p = 0 for a case, the only way bagging can achieve this is if all bagged trees predict zero. If we add noise to the trees that bagging is averaging over, this noise will cause some trees to predict values larger than 0 for this case, thus moving the average prediction of the bagged ensemble away from 0. We observe this effect most strongly with random forests because the base-level trees trained with random forests have relatively high variance due to feature subsetting.” As a result, the calibration curve shows a characteristic sigmoid shape, indicating that the classifier could trust its “intuition” more and typically return probabilities closer to 0 or 1 (a short simulation after this list reproduces this one-sided noise effect).

  • Support Vector Classification (here LinearSVC) shows an even more sigmoid curve than the RandomForestClassifier, which is typical for maximum-margin methods (compare Niculescu-Mizil and Caruana [1]), since they focus on the hard samples that are close to the decision boundary (the support vectors).
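
The one-sided noise argument quoted above can be reproduced with a toy numpy simulation (not part of the original example). Consider a case whose true probability is 0, predicted by 100 noisy base trees whose outputs are clipped to [0, 1] before averaging:

import numpy as np

rng = np.random.RandomState(0)

# Each tree should predict 0 for this case, but noise makes some trees
# predict small positive values. Clipping to [0, 1] makes the noise
# one-sided, so the bagged average is pulled away from 0.
n_trees = 100
tree_preds = np.clip(rng.normal(loc=0.0, scale=0.1, size=n_trees), 0, 1)
print(tree_preds.mean())  # roughly 0.04 instead of the ideal 0.0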


Version

In [1]:
import sklearn
sklearn.__version__
Out[1]:
'0.18'

Imports

In [2]:
print(__doc__)

import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools

import numpy as np
np.random.seed(0)

from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.calibration import calibration_curve
Automatically created module for IPython interactive environment

Calculations

In [3]:
# Generate a synthetic binary classification problem: 20 features, of
# which 2 are informative and 2 are redundant (linear combinations of
# the informative ones)
X, y = datasets.make_classification(n_samples=100000, n_features=20,
                                    n_informative=2, n_redundant=2)

train_samples = 100  # Samples used for training the models

# Train on the first 100 samples; keep the remaining 99,900 for
# evaluating calibration
X_train = X[:train_samples]
X_test = X[train_samples:]
y_train = y[:train_samples]
y_test = y[train_samples:]

# Create classifiers
lr = LogisticRegression()
gnb = GaussianNB()
svc = LinearSVC(C=1.0)
rfc = RandomForestClassifier(n_estimators=100)

Plots

In [4]:
fig = tools.make_subplots(rows=2, cols=1)

# Reference line: a perfectly calibrated classifier lies on the diagonal
perfectly_calibrated_trace = go.Scatter(x=[0, 1], y=[0, 1],
                                        name="Perfectly calibrated",
                                        mode='lines',
                                        line=dict(color='black', width=1,
                                                  dash='dash'))

calibration_lines_plot = [perfectly_calibrated_trace]
calibration_histograms_plot = []
colors = ['blue', 'green', 'red', 'cyan']

for i, (clf, name) in enumerate([(lr, 'Logistic'),
                                 (gnb, 'Naive Bayes'),
                                 (svc, 'Support Vector Classification'),
                                 (rfc, 'Random Forest')]):
    clf.fit(X_train, y_train)
    if hasattr(clf, "predict_proba"):
        prob_pos = clf.predict_proba(X_test)[:, 1]
    else:  # no predict_proba: min-max scale the decision function to [0, 1]
        prob_pos = clf.decision_function(X_test)
        prob_pos = \
            (prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
    fraction_of_positives, mean_predicted_value = \
        calibration_curve(y_test, prob_pos, n_bins=10)

    # Reliability curve (top panel)
    trace1 = go.Scatter(x=mean_predicted_value, y=fraction_of_positives,
                        line=dict(color=colors[i], width=1),
                        name=name)
    calibration_lines_plot.append(trace1)

    # Histogram of predicted probabilities (bottom panel)
    trace2 = go.Histogram(x=prob_pos, name=name, nbinsx=10,
                          marker=dict(color=colors[i]),
                          opacity=0.75, showlegend=False)
    calibration_histograms_plot.append(trace2)

for trace in calibration_lines_plot:
    fig.append_trace(trace, 1, 1)

for trace in calibration_histograms_plot:
    fig.append_trace(trace, 2, 1)

fig['layout']['yaxis1'].update(title='Fraction of positives',
                               range=[-0.05, 1.05])
fig['layout']['yaxis2'].update(title='Count')
fig['layout']['xaxis2'].update(title='Mean predicted value')

fig['layout'].update(title='Calibration plots (reliability curve)',
                     barmode='overlay', height=1000)
This is the format of your plot grid:
[ (1,1) x1,y1 ]
[ (2,1) x2,y2 ]

In [6]:
py.iplot(fig)
Out[6]:
(interactive Plotly figure: calibration reliability curves in the top panel, histograms of predicted probabilities in the bottom panel)

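The sigmoid-shaped reliability curves above suggest a standard follow-up that is not part of this notebook: wrapping a classifier in sklearn.calibration.CalibratedClassifierCV, which learns a sigmoid (Platt scaling) or isotonic correction on held-out predictions. A minimal sketch for the random forest:

from sklearn.calibration import CalibratedClassifierCV

# Fit a sigmoid correction on cross-validated predictions of the forest;
# predict_proba of the wrapper then returns the corrected probabilities.
rfc_sigmoid = CalibratedClassifierCV(rfc, method='sigmoid', cv=3)
rfc_sigmoid.fit(X_train, y_train)
prob_pos_sigmoid = rfc_sigmoid.predict_proba(X_test)[:, 1]
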
References

[1] A. Niculescu-Mizil and R. Caruana, "Predicting Good Probabilities with Supervised Learning", ICML 2005.

License

Author:

    Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>

License:

    BSD Style.