
# K-means Clustering in Scikit-learn

The plots below show, first, the clustering that K-means produces on the iris dataset with three clusters. They then show the effect of a bad initialization on the result: by setting n_init to 1 (the default is 10) with random initialization, the algorithm is run with only a single centroid seed instead of keeping the best of several runs. A third plot shows what eight clusters would deliver, and the final plot shows the ground-truth species labels.
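As a quick aside (not part of the original example), the sketch below illustrates why multiple initializations matter: it fits KMeans on the iris data with n_init=1 and with n_init=10 and compares the resulting inertia (the within-cluster sum of squared distances). The multi-start fit keeps the best of its runs, so its inertia is never higher.

```python
from sklearn.cluster import KMeans
from sklearn import datasets

X = datasets.load_iris().data

# One random centroid seed vs. ten: the n_init=10 fit keeps the run
# with the lowest inertia among its restarts.
single = KMeans(n_clusters=3, init='random', n_init=1, random_state=0).fit(X)
multi = KMeans(n_clusters=3, init='random', n_init=10, random_state=0).fit(X)

print(single.inertia_, multi.inertia_)
```

With the same random state, the first restart of the multi-start fit matches the single-start run, so `multi.inertia_` is at most `single.inertia_`.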

#### New to Plotly?

You can set up Plotly to work in online or offline mode, or in Jupyter notebooks.
We also have a quick-reference cheatsheet (new!) to help you get started!

### Version

In [1]:
import sklearn
sklearn.__version__

Out[1]:
'0.18'

### Imports

This tutorial imports KMeans.

In [2]:
print(__doc__)

import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools

import numpy as np

from sklearn.cluster import KMeans
from sklearn import datasets

Automatically created module for IPython interactive environment
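Before the full example, here is a minimal sketch of the KMeans interface used below (on made-up toy data, not the iris set): fit the estimator, then read the per-sample cluster assignments from `labels_` and the centroid coordinates from `cluster_centers_`.

```python
import numpy as np
from sklearn.cluster import KMeans

# Six points in two well-separated groups
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])

est = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(est.labels_)           # one cluster index per sample
print(est.cluster_centers_)  # one centroid per cluster, shape (2, 2)
```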


### Plots

In [3]:
np.random.seed(5)

fig = tools.make_subplots(rows=2, cols=3,
                          print_grid=False,
                          specs=[[{'is_3d': True}, {'is_3d': True}, {'is_3d': True}],
                                 [{'is_3d': True, 'rowspan': 1}, None, None]])
scene = dict(
    camera=dict(
        up=dict(x=0, y=0, z=1),
        center=dict(x=0, y=0, z=0),
        eye=dict(x=2.5, y=0.1, z=0.1)
    ),
    xaxis=dict(
        range=[-1, 4],
        title='Petal width',
        gridcolor='rgb(255, 255, 255)',
        zerolinecolor='rgb(255, 255, 255)',
        showbackground=True,
        backgroundcolor='rgb(230, 230, 230)',
        showticklabels=False, ticks=''
    ),
    yaxis=dict(
        range=[4, 8],
        title='Sepal length',
        gridcolor='rgb(255, 255, 255)',
        zerolinecolor='rgb(255, 255, 255)',
        showbackground=True,
        backgroundcolor='rgb(230, 230, 230)',
        showticklabels=False, ticks=''
    ),
    zaxis=dict(
        range=[1, 8],
        title='Petal length',
        gridcolor='rgb(255, 255, 255)',
        zerolinecolor='rgb(255, 255, 255)',
        showbackground=True,
        backgroundcolor='rgb(230, 230, 230)',
        showticklabels=False, ticks=''
    )
)

# Load the iris dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Three estimators: the standard 3-cluster fit, an 8-cluster fit, and a
# 3-cluster fit with a single random initialization (the "bad" one)
estimators = {'k_means_iris_3': KMeans(n_clusters=3),
              'k_means_iris_8': KMeans(n_clusters=8),
              'k_means_iris_bad_init': KMeans(n_clusters=3, n_init=1,
                                              init='random')}
fignum = 1
for name, est in estimators.items():
    est.fit(X)
    labels = est.labels_

    trace = go.Scatter3d(x=X[:, 3], y=X[:, 0], z=X[:, 2],
                         showlegend=False,
                         mode='markers',
                         marker=dict(
                             color=labels.astype(np.float),
                             line=dict(color='black', width=1)
                         ))
    fig.append_trace(trace, 1, fignum)

    fignum = fignum + 1

# Reorder the class labels so their colors match the cluster colors
y = np.choose(y, [1, 2, 0]).astype(np.float)

# Plot the ground truth in the second row
trace1 = go.Scatter3d(x=X[:, 3], y=X[:, 0], z=X[:, 2],
                      showlegend=False,
                      mode='markers',
                      marker=dict(
                          color=y,
                          line=dict(color='black', width=1)))
fig.append_trace(trace1, 2, 1)

fig['layout'].update(height=900, width=900,
                     margin=dict(l=10, r=10))

# Apply the shared camera and axis settings to all four 3D scenes
fig['layout']['scene1'].update(scene)
fig['layout']['scene2'].update(scene)
fig['layout']['scene3'].update(scene)
fig['layout']['scene4'].update(scene)

In [4]:
py.iplot(fig)

Out[4]:

[Interactive figure: 3D scatter subplots of the three K-means fits and the ground truth]

Code source:

Gaël Varoquaux

Modified for documentation by Jaques Grobler

License: BSD 3 clause