
This example fits an AdaBoosted decision stump on a non-linearly separable classification dataset composed of two “Gaussian quantiles” clusters (see sklearn.datasets.make_gaussian_quantiles) and plots the decision boundary and decision scores. The distributions of the decision scores are shown separately for samples of class A and class B. The predicted class label for each sample is determined by the sign of its decision score: samples with decision scores greater than zero are classified as B, and all others as A. The magnitude of a decision score indicates how strongly a sample resembles its predicted class. A new dataset with a desired purity of class B could therefore be constructed by, for example, selecting only samples whose decision score exceeds some threshold.
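The sign rule described above can be sketched with a few hypothetical decision scores (standing in for the output of `bdt.decision_function(X)` built later in this example):

```python
import numpy as np

# Hypothetical decision scores, standing in for bdt.decision_function(X).
scores = np.array([-1.3, -0.2, 0.4, 2.1])

# The sign picks the class; the magnitude reflects how strongly the
# sample resembles that class.
predicted = np.where(scores > 0, 'B', 'A')
print(list(predicted))  # negative scores -> 'A', positive scores -> 'B'
```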

New to Plotly?

You can set up Plotly to work in online or offline mode, or in jupyter notebooks.
We also have a quick-reference cheatsheet (new!) to help you get started!

Version

In [1]:
import sklearn
sklearn.__version__

Out[1]:
'0.18.1'

Imports

In [2]:
print(__doc__)

import plotly.plotly as py
import plotly.graph_objs as go

import numpy as np
import matplotlib.pyplot as plt

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles

Automatically created module for IPython interactive environment


Calculations

In [3]:
# Construct dataset
X1, y1 = make_gaussian_quantiles(cov=2.,
                                 n_samples=200, n_features=2,
                                 n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5,
                                 n_samples=300, n_features=2,
                                 n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, - y2 + 1))

# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         algorithm="SAMME",
                         n_estimators=200)

bdt.fit(X, y)

plot_colors = ["blue", "red"]
plot_step = 0.02
class_names = "AB"
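As a quick sanity check, the fitted ensemble can be scored on its own training data. This is a sketch assuming scikit-learn is installed; it rebuilds the dataset from the cell above, and omits the `algorithm="SAMME"` argument because recent scikit-learn releases use SAMME by default (older releases defaulted to SAMME.R):

```python
import numpy as np
from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Rebuild the two-cluster "Gaussian quantiles" dataset.
X1, y1 = make_gaussian_quantiles(cov=2., n_samples=200, n_features=2,
                                 n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5, n_samples=300,
                                 n_features=2, n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, -y2 + 1))

# Boosted decision stumps (max_depth=1).
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         n_estimators=200)
bdt.fit(X, y)

acc = bdt.score(X, y)  # training accuracy of the fitted ensemble
```

After 200 boosting rounds the stump ensemble should separate the training set well, even though a single stump could not.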


Plot the decision boundaries

In [4]:
data = []

def matplotlib_to_plotly(cmap, pl_entries):
    h = 1.0 / (pl_entries - 1)
    pl_colorscale = []

    for k in range(pl_entries):
        # list() is needed on Python 3, where map() returns an iterator
        C = list(map(np.uint8, np.array(cmap(k * h)[:3]) * 255))
        pl_colorscale.append([k * h, 'rgb' + str((int(C[0]), int(C[1]), int(C[2])))])

    return pl_colorscale
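The helper samples a matplotlib colormap at evenly spaced positions and emits the `[position, 'rgb(r, g, b)']` pairs that Plotly accepts as a colorscale. A minimal sketch of its output, using a stand-in black-to-red ramp (a hypothetical callable used here so the demo does not require matplotlib; any callable returning RGBA values in [0, 1] works):

```python
import numpy as np

def matplotlib_to_plotly(cmap, pl_entries):
    # Sample the colormap at pl_entries evenly spaced points and build
    # a Plotly colorscale: a list of [position, 'rgb(r, g, b)'] pairs.
    h = 1.0 / (pl_entries - 1)
    pl_colorscale = []
    for k in range(pl_entries):
        C = list(map(np.uint8, np.array(cmap(k * h)[:3]) * 255))
        pl_colorscale.append([k * h, 'rgb' + str((int(C[0]), int(C[1]), int(C[2])))])
    return pl_colorscale

# Stand-in colormap: a linear black-to-red ramp returning RGBA tuples.
ramp = lambda t: (t, 0.0, 0.0, 1.0)

scale = matplotlib_to_plotly(ramp, 3)
print(scale)
```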

In [5]:
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
y_ = np.arange(y_min, y_max, plot_step)
x_ = np.arange(x_min, x_max, plot_step)
xx, yy = np.meshgrid(x_, y_)

Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

cs = go.Contour(x=x_, y=y_, z=Z,
                colorscale=matplotlib_to_plotly(plt.cm.Paired, 5),
                showscale=False)
data.append(cs)


Plot the training points

In [6]:
for i, n, c in zip(range(2), class_names, plot_colors):
    idx = np.where(y == i)
    trace = go.Scatter(x=X[idx, 0][0], y=X[idx, 1][0],
                       mode='markers',
                       marker=dict(color=c,
                                   colorscale=matplotlib_to_plotly(plt.cm.Paired, 5)),
                       name="Class %s" % n)
    data.append(trace)

In [7]:
layout = go.Layout(title='Decision Boundary',
                   xaxis=dict(title='x'),
                   yaxis=dict(title='y'))

fig = go.Figure(data=data, layout=layout)

py.iplot(fig)

Out[7]:

Plot the two-class decision scores

In [8]:
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
data = []

for i, n, c in zip(range(2), class_names, plot_colors):
    trace = go.Histogram(x=twoclass_output[y == i],
                         nbinsx=10,
                         marker=dict(color=c),
                         name='Class %s' % n,
                         opacity=0.5)
    data.append(trace)

layout = go.Layout(title='Decision Scores',
                   barmode='overlay',
                   xaxis=dict(title='Score'),
                   yaxis=dict(title='Samples'))

fig = go.Figure(data=data, layout=layout)

py.iplot(fig)

Out[8]:
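The purity-by-thresholding idea mentioned in the introduction can be sketched with hypothetical scores and labels (standing in for `twoclass_output` and `y` above): keeping only samples whose decision score clears a threshold trades recall of class B for a higher fraction of class-B samples in the subset.

```python
import numpy as np

# Hypothetical decision scores and true labels (0 = class A, 1 = class B),
# standing in for bdt.decision_function(X) and y from the example above.
scores = np.array([-2.0, -0.5, 0.1, 0.8, 1.5, 2.3])
labels = np.array([0, 0, 1, 0, 1, 1])

# Keep only samples whose score clears the threshold.
threshold = 0.5
selected = scores > threshold
purity = labels[selected].mean()  # fraction of class-B samples kept
print(selected.sum(), purity)
```

Raising the threshold generally increases the purity of the selected subset at the cost of discarding more true class-B samples.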

Author:

     Noel Dawe <noel.dawe@gmail.com>

License:

     BSD 3 clause