
FastICA on 2D point clouds in Scikit-learn

This example visually compares, in the feature space, the results of two different component analysis techniques.

Independent component analysis (ICA) vs Principal component analysis (PCA).

Representing ICA in the feature space gives the view of ‘geometric ICA’: ICA is an algorithm that finds directions in the feature space corresponding to projections with high non-Gaussianity. These directions need not be orthogonal in the original feature space, but they are orthogonal in the whitened feature space, in which all directions correspond to the same variance.

PCA, on the other hand, finds orthogonal directions in the raw feature space that correspond to directions accounting for maximum variance.

Here we simulate independent sources using a highly non-Gaussian process: two Student's t signals with a low number of degrees of freedom (top-left figure). We mix them to create observations (top-right figure). In this raw observation space, directions identified by PCA are represented by orange vectors. We represent the signal in the PCA space, after whitening by the variance corresponding to the PCA vectors (lower left). Running ICA corresponds to finding a rotation in this space to identify the directions of largest non-Gaussianity (lower right).
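The Student's t distribution with few degrees of freedom is heavy-tailed, which is what makes it strongly non-Gaussian. A small sketch comparing sample excess kurtosis (approximately zero for a Gaussian, large and positive for heavy tails) illustrates this; the exact numbers depend on the random seed:

```python
import numpy as np

def excess_kurtosis(x):
    # Fourth standardized moment minus 3 (approximately 0 for a Gaussian)
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0

rng = np.random.RandomState(42)
t_samples = rng.standard_t(1.5, size=20000)   # heavy-tailed, non-Gaussian
g_samples = rng.standard_normal(20000)        # Gaussian baseline

print(excess_kurtosis(g_samples))  # close to 0
print(excess_kurtosis(t_samples))  # large and positive
```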



In [1]:
import sklearn


This tutorial imports PCA and FastICA.

In [2]:
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools

import numpy as np
from sklearn.decomposition import PCA, FastICA


Generate sample data

In [3]:
rng = np.random.RandomState(42)
S = rng.standard_t(1.5, size=(20000, 2))
S[:, 0] *= 2.

# Mix data
A = np.array([[1, 1], [0, 2]])  # Mixing matrix

X =, A.T)  # Generate observations

pca = PCA()
S_pca_ = pca.fit(X).transform(X)

ica = FastICA(random_state=rng)
S_ica_ = ica.fit_transform(X)  # Estimate the sources

S_ica_ /= S_ica_.std(axis=0)

Plot results

In [4]:
def plot_samples(S, fig, row, col, axis_list=None):
    trace = go.Scatter(x=S[:, 0],
                       y=S[:, 1],
                       mode='markers',
                       showlegend=False,
                       marker=dict(size=2, color='white',
                                   line=dict(color='steelblue', width=1)))
    fig.append_trace(trace, row, col)
    if axis_list is not None:
        colors = ['orange', 'red']
        for color, axis in zip(colors, axis_list):
            axis /= axis.std()
            x_axis, y_axis = axis
            # Draw each axis vector as its own line trace from the origin
            for i in range(len(x_axis)):
                trace1 = go.Scatter(x=[0, x_axis[i]],
                                    y=[0, y_axis[i]],
                                    mode='lines',
                                    showlegend=False,
                                    line=dict(color=color, width=2))
                fig.append_trace(trace1, row, col)
In [5]:
fig = tools.make_subplots(rows=2, cols=2,
                          subplot_titles=('True Independent Sources', 'Observations',
                                          'PCA recovered signals', 'ICA recovered signals'))

plot_samples(S / S.std(), fig, 1, 1)

axis_list = [pca.components_.T, ica.mixing_]
plot_samples(X / np.std(X), fig, 1, 2, axis_list=axis_list)

plot_samples(S_pca_ / np.std(S_pca_, axis=0), fig, 2, 1)

plot_samples(S_ica_ / np.std(S_ica_), fig, 2, 2)

for k in map(str, range(1, 5)):
    x = 'xaxis' + k
    y = 'yaxis' + k
    fig['layout'][x].update(showticklabels=False, ticks='',
                            range=[-3, 3])
    fig['layout'][y].update(showticklabels=False, ticks='',
                            range=[-3, 3])
In [6]:
py.iplot(fig)



Authors: Alexandre Gramfort, Gael Varoquaux

License: BSD 3 clause