
# Johnson-Lindenstrauss bound in Scikit-learn

The Johnson-Lindenstrauss bound for embedding with random projections

The Johnson-Lindenstrauss lemma states that any high dimensional dataset can be randomly projected into a lower dimensional Euclidean space while controlling the distortion in the pairwise distances.

### Theoretical bounds

The distortion introduced by a random projection p is controlled by the fact that p defines an eps-embedding with good probability, as defined by:

(1 - eps) ||u - v||^2 < ||p(u) - p(v)||^2 < (1 + eps) ||u - v||^2

Where u and v are any rows taken from a dataset of shape [n_samples, n_features] and p is a projection by a random Gaussian N(0, 1) matrix of shape [n_components, n_features] (or a sparse Achlioptas matrix).

The minimum number of components that guarantees the eps-embedding is given by:

n_components >= 4 log(n_samples) / (eps^2 / 2 - eps^3 / 3)
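For intuition, this bound can be evaluated directly. Below is a minimal sketch of the formula in plain NumPy (the helper name `jl_min_dim` is ours; it mirrors scikit-learn's `johnson_lindenstrauss_min_dim`):

```python
import numpy as np

def jl_min_dim(n_samples, eps):
    # n_components >= 4 * log(n_samples) / (eps**2 / 2 - eps**3 / 3)
    denom = eps ** 2 / 2.0 - eps ** 3 / 3.0
    return int(4 * np.log(n_samples) / denom)

print(jl_min_dim(500, eps=0.5))  # a loose embedding needs only a few hundred dimensions
print(jl_min_dim(500, eps=0.1))  # a tight one needs several thousand
```

Note that the bound depends only on n_samples and eps, not on the original number of features.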

The first plot shows that with an increasing number of samples n_samples, the minimal number of dimensions n_components increases logarithmically in order to guarantee an eps-embedding.

The second plot shows that increasing the admissible distortion eps drastically reduces the minimal number of dimensions n_components for a given number of samples n_samples.

### Empirical validation

We validate the above bounds on the digits dataset or on the 20 newsgroups text documents (TF-IDF word frequencies) dataset:

• for the digits dataset, the 8x8 gray-level pixel data of 500 handwritten digit pictures are randomly projected to spaces of various larger numbers of dimensions n_components.

• for the 20 newsgroups dataset, some 500 documents with 100k features in total are projected using a sparse random matrix to smaller Euclidean spaces with various values for the target number of dimensions n_components.

The default dataset is the digits dataset. To run the example on the 20 newsgroups dataset, pass the --twenty-newsgroups command line argument to this script.
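As a rough illustration of what such a sparse random matrix looks like, here is a minimal NumPy sketch (the helper `sparse_random_matrix` is ours; scikit-learn's `SparseRandomProjection` draws an equivalent matrix in sparse format, with default density 1/sqrt(n_features)):

```python
import numpy as np

def sparse_random_matrix(n_components, n_features, seed=0):
    # Achlioptas-style sparse projection: most entries are zero, the rest
    # are +/-1 with equal probability, scaled so that squared distances
    # are preserved in expectation.
    rng = np.random.default_rng(seed)
    density = 1.0 / np.sqrt(n_features)
    signs = rng.choice([-1.0, 1.0], size=(n_components, n_features))
    mask = rng.random((n_components, n_features)) < density
    return np.sqrt(1.0 / (density * n_components)) * signs * mask

P = sparse_random_matrix(n_components=500, n_features=1000)
x = np.random.default_rng(1).normal(size=1000)
print(np.linalg.norm(P @ x) / np.linalg.norm(x))  # close to 1
```

Because only a fraction of the entries are non-zero, projecting sparse high-dimensional data with such a matrix is both memory- and CPU-efficient.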

For each value of n_components, we plot:

• 2D distribution of sample pairs with pairwise distances in original and projected spaces as the x and y axes, respectively.
• 1D histogram of the ratio of those distances (projected / original).

We can see that for low values of n_components the distribution is wide, with many distorted pairs and a skewed distribution (due to the hard limit of zero ratio on the left, as distances are always positive), while for larger values of n_components the distortion is controlled and the distances are well preserved by the random projection.
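The same effect can be checked in a few lines of NumPy with a dense Gaussian projection on synthetic data (a sketch, not the example's pipeline): the spread of the distance ratios shrinks as n_components grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 100, 1000
X = rng.normal(size=(n_samples, n_features))

stds = {}
for n_components in (10, 500):
    # Gaussian random projection, scaled to preserve distances in expectation
    P = rng.normal(size=(n_features, n_components)) / np.sqrt(n_components)
    Xp = X @ P
    d_orig = np.linalg.norm(X[0] - X[1:], axis=1)
    d_proj = np.linalg.norm(Xp[0] - Xp[1:], axis=1)
    rates = d_proj / d_orig
    stds[n_components] = rates.std()
    print(n_components, stds[n_components])  # spread shrinks as n_components grows
```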

#### New to Plotly?

You can set up Plotly to work in online or offline mode, or in jupyter notebooks.
We also have a quick-reference cheatsheet (new!) to help you get started!

### Version

In [1]:
import sklearn
sklearn.__version__

Out[1]:
'0.18'

### Imports

In [2]:
from plotly import tools
import plotly.plotly as py
import plotly.graph_objs as go

import sys
from time import time
import numpy as np
import matplotlib.pyplot as plt

from sklearn.random_projection import johnson_lindenstrauss_min_dim
from sklearn.random_projection import SparseRandomProjection
from sklearn.datasets import load_digits
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.metrics.pairwise import euclidean_distances


### Part 1

Plot the theoretical dependency between n_components_min and n_samples

In [3]:
print(__doc__)

fig1 = tools.make_subplots(rows=1, cols=2, subplot_titles=(
    'Johnson-Lindenstrauss bounds:<br>n_samples vs n_components',
    'Johnson-Lindenstrauss bounds:<br>n_components vs eps'))

def matplotlib_to_plotly(cmap, pl_entries):
    h = 1.0 / (pl_entries - 1)
    pl_colorscale = []

    for k in range(pl_entries):
        # list() is needed in Python 3, where map() returns an iterator
        C = list(map(np.uint8, np.array(cmap(k * h)[:3]) * 255))
        pl_colorscale.append([k * h, 'rgb' + str((C[0], C[1], C[2]))])

    return pl_colorscale

c = matplotlib_to_plotly(plt.cm.Blues, 500)

eps_range = np.linspace(0.1, 0.99, 5)

# range of number of samples (observations) to embed
n_samples_range = np.logspace(1, 9, 9)

names = []
for eps in eps_range:
    names.append("eps = " + str(eps))

colors = plt.cm.Blues(np.linspace(0.3, 1.0, len(eps_range)))
j = 0

for eps, color in zip(eps_range, colors):
    if j == 0:
        col = c[90][1]
    else:
        col = c[j * 100][1]

    min_n_components = johnson_lindenstrauss_min_dim(n_samples_range, eps=eps)
    trace = go.Scatter(x=n_samples_range, y=min_n_components, mode="lines",
                       line=dict(color=col),
                       name=names[j])
    fig1.append_trace(trace, 1, 1)
    j = j + 1
    if j > 3:
        break

fig1['layout']['xaxis1'].update(title='Number of observations to eps-embed',
                                type="log")
fig1['layout']['yaxis1'].update(title='Minimum number of dimensions',
                                type="log")
eps_range = np.linspace(0.01, 0.99, 100)

# range of number of samples (observations) to embed
n_samples_range = np.logspace(2, 6, 5)

names2 = []
for n in n_samples_range:
    names2.append("n_samples = " + str(n))

colors = plt.cm.Blues(np.linspace(0.3, 1.0, len(n_samples_range)))

i = 0
for n_samples, color in zip(n_samples_range, colors):
    if i == 0:
        col = c[90][1]
    else:
        col = c[i * 100][1]

    min_n_components = johnson_lindenstrauss_min_dim(n_samples, eps=eps_range)
    trace1 = go.Scatter(x=eps_range, y=min_n_components, mode="lines",
                        line=dict(color=col),
                        name=names2[i])
    fig1.append_trace(trace1, 1, 2)
    i = i + 1

fig1['layout']['xaxis2'].update(title='Distortion eps')
fig1['layout']['yaxis2'].update(title='Minimum number of dimensions',
                                type="log")
fig1['layout'].update(width=1000)

py.iplot(fig1, filename="J-L-1")

Automatically created module for IPython interactive environment
This is the format of your plot grid:
[ (1,1) x1,y1 ]  [ (1,2) x2,y2 ]


Out[3]:

### Part 2

Perform a sparse random projection of the digits images, which are quite low dimensional and dense, or of documents of the 20 newsgroups dataset, which is both high dimensional and sparse.

In [4]:
if '--twenty-newsgroups' in sys.argv:
    # Need an internet connection hence not enabled by default
    data = fetch_20newsgroups_vectorized().data[:500]
else:
    data = load_digits().data[:500]

n_samples, n_features = data.shape
print("Embedding %d samples with dim %d using various random projections"
      % (n_samples, n_features))

n_components_range = np.array([300, 1000, 10000])
dists = euclidean_distances(data, squared=True).ravel()

# select only non-identical samples pairs
nonzero = dists != 0
dists = dists[nonzero]

tracen = []
hist_trace = []

for n_components in n_components_range:
    t0 = time()
    rp = SparseRandomProjection(n_components=n_components)
    projected_data = rp.fit_transform(data)
    print("Projected %d samples from %d to %d in %0.3fs"
          % (n_samples, n_features, n_components, time() - t0))

    if hasattr(rp, 'components_'):
        n_bytes = rp.components_.data.nbytes
        n_bytes += rp.components_.indices.nbytes
        print("Random matrix with size: %0.3fMB" % (n_bytes / 1e6))

    projected_dists = euclidean_distances(
        projected_data, squared=True).ravel()[nonzero]

    trace3 = go.Histogram2d(x=dists, y=projected_dists, histnorm='probability',
                            colorscale=matplotlib_to_plotly(plt.cm.PuBu, 100),
                            showscale=False)
    tracen.append(trace3)

    rates = projected_dists / dists
    print("Mean distances rate: %0.2f (%0.2f)"
          % (np.mean(rates), np.std(rates)))

    trace4 = go.Histogram(x=rates, histnorm='probability density', nbinsx=40,
                          showlegend=False,
                          marker=dict(color='blue',
                                      line=dict(color='black', width=2)))
    hist_trace.append(trace4)

# TODO: compute the expected value of eps and add them to the previous plot
# as vertical lines / region

Embedding 500 samples with dim 64 using various random projections
Projected 500 samples from 64 to 300 in 0.016s
Random matrix with size: 0.029MB
Mean distances rate: 0.97 (0.07)
Projected 500 samples from 64 to 1000 in 0.026s
Random matrix with size: 0.095MB
Mean distances rate: 0.98 (0.04)

Projected 500 samples from 64 to 10000 in 0.316s
Random matrix with size: 0.961MB
Mean distances rate: 1.00 (0.02)

In [5]:
fig2 = tools.make_subplots(rows=1, cols=2, subplot_titles=(
'Pairwise distances distribution for n_components=300',
'Histogram of pairwise distance rates for n_components=300',
))

fig2.append_trace(tracen[0], 1, 1)
fig2.append_trace(hist_trace[0], 1, 2)

fig2['layout']['xaxis1'].update(title='Pairwise squared distances in original space')
fig2['layout']['yaxis1'].update(title='Pairwise squared distances in projected space')

fig2['layout']['xaxis2'].update(title='Squared distances rate: projected / original')
fig2['layout']['yaxis2'].update(title='Distribution of sample pairs')

py.iplot(fig2, filename='J-L-2')

This is the format of your plot grid:
[ (1,1) x1,y1 ]  [ (1,2) x2,y2 ]


Out[5]:
In [6]:
fig3 = tools.make_subplots(rows=1, cols=2, subplot_titles=(
'Pairwise distances distribution for n_components=1000',
'Histogram of pairwise distance rates for n_components=1000',
))

fig3.append_trace(tracen[1], 1, 1)
fig3.append_trace(hist_trace[1], 1, 2)

fig3['layout']['xaxis1'].update(title='Pairwise squared distances in original space')
fig3['layout']['yaxis1'].update(title='Pairwise squared distances in projected space')

fig3['layout']['xaxis2'].update(title='Squared distances rate: projected / original')
fig3['layout']['yaxis2'].update(title='Distribution of sample pairs')

py.iplot(fig3, filename='J-L-3')

This is the format of your plot grid:
[ (1,1) x1,y1 ]  [ (1,2) x2,y2 ]


Out[6]:
In [7]:
fig4 = tools.make_subplots(rows=1, cols=2, subplot_titles=(
    'Pairwise distances distribution for n_components=10000',
    'Histogram of pairwise distance rates for n_components=10000',
))

fig4.append_trace(tracen[2], 1, 1)
fig4.append_trace(hist_trace[2], 1, 2)

fig4['layout']['xaxis1'].update(title='Pairwise squared distances in original space')
fig4['layout']['yaxis1'].update(title='Pairwise squared distances in projected space')

fig4['layout']['xaxis2'].update(title='Squared distances rate: projected / original')
fig4['layout']['yaxis2'].update(title='Distribution of sample pairs')

py.iplot(fig4, filename='J-L-4')

This is the format of your plot grid:
[ (1,1) x1,y1 ]  [ (1,2) x2,y2 ]


Out[7]:

### Remarks

According to the JL lemma, projecting 500 samples without too much distortion requires at least several thousand dimensions, irrespective of the number of features of the original dataset.

Hence, using random projections on the digits dataset, which has only 64 features in the input space, does not make sense: it does not allow for dimensionality reduction in this case.

On the 20 newsgroups dataset, on the other hand, the dimensionality can be decreased from 56436 down to 10000 while reasonably preserving pairwise distances.
