
Explicit Feature Map Approximation for RBF Kernels in Scikit-learn

An example illustrating the approximation of the feature map of an RBF kernel.

It shows how to use RBFSampler and Nystroem to approximate the feature map of an RBF kernel for classification with an SVM on the digits dataset. Results for a linear SVM in the original space, a linear SVM using the approximate mappings, and a kernelized SVM are compared. Timings and accuracy are shown for varying numbers of Monte Carlo samples (in the case of RBFSampler, which uses random Fourier features) and for different-sized subsets of the training set (for Nystroem).

Please note that the dataset here is not large enough to show the benefits of kernel approximation, as the exact SVM is still reasonably fast.

Sampling more dimensions clearly leads to better classification results, but comes at a greater computational cost. This tradeoff between runtime and accuracy is controlled by the parameter n_components. Note that fitting both the linear SVM and the SVM with the approximate kernel map could be greatly accelerated by using stochastic gradient descent via sklearn.linear_model.SGDClassifier; this is not easily possible for the kernelized SVM. A sketch of that variant follows below.
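As a rough sketch (not part of the original example), the LinearSVC stage of the approximate-kernel pipeline could be swapped for SGDClassifier; with the hinge loss this is a linear SVM trained by stochastic gradient descent. The n_components value here is illustrative:

from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDClassifier
from sklearn.kernel_approximation import RBFSampler

# assumes data_train / targets_train as prepared in the Calculations section
sgd_approx_svm = Pipeline([
    ("feature_map", RBFSampler(gamma=.2, n_components=100, random_state=1)),
    ("svm", SGDClassifier(loss="hinge", random_state=1))])
# sgd_approx_svm.fit(data_train, targets_train)
# sgd_approx_svm.score(data_test, targets_test)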

The second plot visualizes the decision surfaces of the RBF kernel SVM and of the linear SVMs with approximate kernel maps, projected onto the first two principal components of the data. This visualization should be taken with a grain of salt, since it is just an interesting slice through the decision surface in 64 dimensions. In particular, note that a data point (represented as a dot) will not necessarily be classified into the region it lies in, since it does not lie on the plane spanned by the first two principal components.
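To make that caveat concrete, here is a minimal self-contained sketch (assumed setup, separate from the example code below) showing that a point and its projection onto the plane of the first two principal components can receive different predictions:

import numpy as np
from sklearn import datasets, svm
from sklearn.decomposition import PCA

digits = datasets.load_digits(n_class=9)
X = digits.data / 16.
X -= X.mean(axis=0)

clf = svm.SVC(gamma=.2).fit(X, digits.target)
pca = PCA(n_components=2).fit(X)

x = X[:1]                                             # one sample in 64 dimensions
x_on_plane = pca.inverse_transform(pca.transform(x))  # nearest point on the PC plane
print(clf.predict(x), clf.predict(x_on_plane))        # the two labels may disagree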

The usage of RBFSampler and Nystroem is described in detail in Kernel Approximation.
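For quick reference, both approximators follow the standard fit/transform API; a minimal sketch on random data (the gamma and n_components values are illustrative, not tuned):

import numpy as np
from sklearn.kernel_approximation import RBFSampler, Nystroem

X = np.random.RandomState(0).rand(20, 4)
rbf_features = RBFSampler(gamma=.2, n_components=10, random_state=1).fit_transform(X)
nys_features = Nystroem(gamma=.2, n_components=10, random_state=1).fit_transform(X)
print(rbf_features.shape, nys_features.shape)  # (20, 10) (20, 10)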

New to Plotly?

Plotly's Python library is free and open source! Get started by downloading the client and reading the primer.
You can set up Plotly to work in online or offline mode, or in Jupyter notebooks.
We also have a quick-reference cheatsheet (new!) to help you get started!

Version

In [1]:
import sklearn
sklearn.__version__
Out[1]:
'0.18'

Imports

This tutorial imports RBFSampler, Nystroem and PCA.

In [2]:
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools

import numpy as np
from time import time
import matplotlib.pyplot as plt
from sklearn import datasets, svm, pipeline
from sklearn.kernel_approximation import (RBFSampler,
                                          Nystroem)
from sklearn.decomposition import PCA

Calculations

In [3]:
print(__doc__)

# The digits dataset
digits = datasets.load_digits(n_class=9)

# To apply a classifier to this data, we need to flatten the images,
# turning the data into a (n_samples, n_features) matrix:
n_samples = len(digits.data)
data = digits.data / 16.
data -= data.mean(axis=0)

# We train on the first half of the digits
data_train, targets_train = data[:n_samples // 2], digits.target[:n_samples // 2]


# Now predict the value of the digit on the second half:
data_test, targets_test = data[n_samples // 2:], digits.target[n_samples // 2:]

# Create a classifier: a support vector classifier
kernel_svm = svm.SVC(gamma=.2)
linear_svm = svm.LinearSVC()

# create pipeline from kernel approximation
# and linear svm
feature_map_fourier = RBFSampler(gamma=.2, random_state=1)
feature_map_nystroem = Nystroem(gamma=.2, random_state=1)
fourier_approx_svm = pipeline.Pipeline([("feature_map", feature_map_fourier),
                                        ("svm", svm.LinearSVC())])

nystroem_approx_svm = pipeline.Pipeline([("feature_map", feature_map_nystroem),
                                        ("svm", svm.LinearSVC())])

# fit and predict using linear and kernel svm:

kernel_svm_time = time()
kernel_svm.fit(data_train, targets_train)
kernel_svm_score = kernel_svm.score(data_test, targets_test)
kernel_svm_time = time() - kernel_svm_time

linear_svm_time = time()
linear_svm.fit(data_train, targets_train)
linear_svm_score = linear_svm.score(data_test, targets_test)
linear_svm_time = time() - linear_svm_time

sample_sizes = 30 * np.arange(1, 10)
fourier_scores = []
nystroem_scores = []
fourier_times = []
nystroem_times = []

for D in sample_sizes:
    fourier_approx_svm.set_params(feature_map__n_components=D)
    nystroem_approx_svm.set_params(feature_map__n_components=D)
    start = time()
    nystroem_approx_svm.fit(data_train, targets_train)
    nystroem_times.append(time() - start)

    start = time()
    fourier_approx_svm.fit(data_train, targets_train)
    fourier_times.append(time() - start)

    fourier_score = fourier_approx_svm.score(data_test, targets_test)
    nystroem_score = nystroem_approx_svm.score(data_test, targets_test)
    nystroem_scores.append(nystroem_score)
    fourier_scores.append(fourier_score)
Automatically created module for IPython interactive environment

Plotting Classification Accuracy and Training Times

In [4]:
fig = tools.make_subplots(rows=2, cols=1, 
                          subplot_titles=('Classification accuracy', 
                                           'Training times'))

accuracy_trace1 = go.Scatter(x=sample_sizes, y=nystroem_scores,
                             name="Nystroem approx. kernel",
                             mode="lines", line=dict(
                                                 color="rgb(65,105,255)"))
timescale_trace1 = go.Scatter(x=sample_sizes, y=nystroem_times, 
                              mode="lines", line=dict(
                                                  color="rgb(65,105,255)",
                                                  dash="dash"),
                              name='Nystroem approx. kernel')

accuracy_trace2 = go.Scatter(x=sample_sizes, y=fourier_scores,
                            name="Fourier approx. kernel",
                            mode="lines", line=dict(
                                                color="green"))
timescale_trace2 = go.Scatter(x=sample_sizes, y=fourier_times,
                               mode="lines", line=dict(
                                                   color="green", 
                                                   dash="dash"),
                              name='Fourier approx. kernel')

# horizontal lines for exact rbf and linear kernels:
accuracy_trace3 = go.Scatter(x=[sample_sizes[0], sample_sizes[-1]],
                             y=[linear_svm_score, linear_svm_score], 
                             name="linear svm",
                             mode="lines", line=dict(
                                                 color="red"))
timescale_trace3 = go.Scatter(x=[sample_sizes[0], sample_sizes[-1]],
                              y=[linear_svm_time, linear_svm_time],
                              mode="lines", line=dict(
                                                  color="red", dash="dash"),
                              name='linear svm')

accuracy_trace4 = go.Scatter(x=[sample_sizes[0], sample_sizes[-1]],
                             y=[kernel_svm_score, kernel_svm_score],
                             name="rbf svm",
                             mode="lines", line=dict(
                                                 color="rgb(135,206,235)"))
timescale_trace4 = go.Scatter(x=[sample_sizes[0], sample_sizes[-1]],
                              y=[kernel_svm_time, kernel_svm_time],
                              name='rbf svm',
                              mode="lines", line=dict(
                                                  color="rgb(135,206,235)",
                                                  dash="dash"))

# vertical line for dataset dimensionality = 64
accuracy_trace5 = go.Scatter(x=[64, 64], y=[0.7, 1], name="n_features",
                             mode="lines", line=dict(
                                                 color="rgb(138,43,238)"))

for trace in [accuracy_trace1, accuracy_trace2,
              accuracy_trace3, accuracy_trace4,
              accuracy_trace5]:
    fig.append_trace(trace, 1, 1)

for trace in [timescale_trace1, timescale_trace2,
              timescale_trace3, timescale_trace4]:
    fig.append_trace(trace, 2, 1)

fig['layout'].update(height=900)
fig['layout']['xaxis1'].update(showticklabels=False, 
                               range=[sample_sizes[0], 
                                      sample_sizes[-1]])
fig['layout']['yaxis1'].update(title='Classification accuracy',
                               range=[np.min(fourier_scores), 1])

fig['layout']['xaxis2'].update(title='Sampling steps = transformed feature dimension')
fig['layout']['yaxis2'].update(title='Training time in seconds')


py.iplot(fig, filename="accuracy-training")
This is the format of your plot grid:
[ (1,1) x1,y1 ]
[ (2,1) x2,y2 ]

Out[4]:

[interactive Plotly figure: classification accuracy and training times]

Plotting SVC Decision Surfaces

In [5]:
# n_components for the approximate maps below reflects the last fit in the
# timing loop above (D = 270)
fig1 = tools.make_subplots(rows=1, cols=3, subplot_titles=(
                          'SVC with rbf kernel',
                          'SVC (linear kernel)<br> with Fourier rbf feature map<br>'+
                          'n_components=270',
                          'SVC (linear kernel)<br> with Nystroem rbf feature map<br>'+
                          'n_components=270'))

# visualize the decision surface, projected down to the first
# two principal components of the dataset
pca = PCA(n_components=8).fit(data_train)

X = pca.transform(data_train)

# Generate grid along first two principal components
multiples = np.arange(-2, 2, 0.1)
# steps along first component
first = multiples[:, np.newaxis] * pca.components_[0, :]
# steps along second component
second = multiples[:, np.newaxis] * pca.components_[1, :]
# combine
grid = first[np.newaxis, :, :] + second[:, np.newaxis, :]
flat_grid = grid.reshape(-1, data.shape[1])

def matplotlib_to_plotly(cmap, pl_entries):
    # convert a matplotlib colormap into a Plotly colorscale
    h = 1.0/(pl_entries-1)
    pl_colorscale = []

    for k in range(pl_entries):
        C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))
        pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])

    return pl_colorscale

contour_list=[]

scatter_list=[]
# predict and plot (order matches the subplot titles above)
for clf in (kernel_svm, fourier_approx_svm, nystroem_approx_svm):
    # Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max]x[y_min, y_max].
    Z = clf.predict(flat_grid)

    # Put the result into a color plot
    Z = Z.reshape(grid.shape[:-1])
    trace1 = go.Contour(x=multiples, y=multiples, z=Z,
                        line=dict(smoothing=0.85),
                        contours=dict(coloring='heatmap'),
                        colorscale=matplotlib_to_plotly(plt.cm.Paired, 300),
                        opacity=0.7, showscale=False)
    
    contour_list.append(trace1)

    # Plot also the training points
    trace2 = go.Scatter(x=X[:, 0], y=X[:, 1], mode="markers",
                        showlegend=False,
                        marker=dict(
                                size=8, color=targets_train,
                                colorscale=matplotlib_to_plotly(plt.cm.Paired, 300),
                                showscale=False,
                                line=dict(width=1))
                       )
    scatter_list.append(trace2)
    
for i in range(3):
    fig1.append_trace(contour_list[i], 1, i + 1)
    fig1.append_trace(scatter_list[i], 1, i + 1)

for i in range(1, 4):
    x = 'xaxis' + str(i)
    y = 'yaxis' + str(i)
    fig1['layout'][x].update(showticklabels=False, range=[-2, 1.8],
                             zeroline=False, ticklen=0)
    fig1['layout'][y].update(showticklabels=False, range=[-2, 1.8],
                             zeroline=False, ticklen=0)

py.iplot(fig1, filename="svm")
This is the format of your plot grid:
[ (1,1) x1,y1 ]  [ (1,2) x2,y2 ]  [ (1,3) x3,y3 ]

Out[5]:

[interactive Plotly figure: decision surfaces of the three classifiers]

License

Authors:

    Gael Varoquaux <gael.varoquaux@normalesup.org>
    Andreas Mueller <amueller@ais.uni-bonn.de>

License:

    BSD 3 clause