
# HuberRegressor vs Ridge on Dataset with Strong Outliers in Scikit-learn

Fit Ridge and HuberRegressor on a dataset with outliers.

The example shows that the Ridge predictions are strongly influenced by the outliers present in the dataset, while the Huber regressor is less affected because it applies a linear (rather than quadratic) loss to large residuals. As the parameter epsilon of the Huber regressor is increased, its decision function approaches that of Ridge.
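The switch from a quadratic to a linear penalty is the core of this robustness. A minimal NumPy sketch of the Huber loss (an illustration, not scikit-learn's internal implementation): residuals with magnitude at most epsilon are penalized quadratically, larger ones only linearly.

```python
import numpy as np

def huber_loss(residual, epsilon=1.35):
    # Quadratic near zero, linear in the tails: outliers get a linear penalty.
    r = np.abs(residual)
    return np.where(r <= epsilon,
                    0.5 * r ** 2,
                    epsilon * r - 0.5 * epsilon ** 2)

print(huber_loss(1.0))   # 0.5 (quadratic region)
print(huber_loss(10.0))  # 12.58875, far below the squared loss of 50.0
```

Raising epsilon widens the quadratic region, which is why the fit drifts toward the least-squares (Ridge) solution as epsilon grows.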

#### New to Plotly?

Plotly's Python library is free and open source! Get started by downloading the client and reading the primer.
You can set up Plotly to work in online or offline mode, or in Jupyter notebooks.
We also have a quick-reference cheatsheet (new!) to help you get started!

### Version

In [1]:
import sklearn
sklearn.__version__

Out[1]:
'0.18.1'

### Imports

This tutorial imports make_regression, HuberRegressor and Ridge.

In [2]:
print(__doc__)

import plotly.plotly as py
import plotly.graph_objs as go

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import HuberRegressor, Ridge

Automatically created module for IPython interactive environment


### Calculations

In [3]:
# Generate toy data.
rng = np.random.RandomState(0)
X, y = make_regression(n_samples=20, n_features=1, random_state=0, noise=4.0,
                       bias=100.0)

# Add four strong outliers to the dataset.
X_outliers = rng.normal(0, 0.5, size=(4, 1))
y_outliers = rng.normal(0, 2.0, size=4)
X_outliers[:2, :] += X.max() + X.mean() / 4.
X_outliers[2:, :] += X.min() - X.mean() / 4.
y_outliers[:2] += y.min() - y.mean() / 4.
y_outliers[2:] += y.max() + y.mean() / 4.
X = np.vstack((X, X_outliers))
y = np.concatenate((y, y_outliers))
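Before plotting, it is easy to see numerically why these outliers matter. The sketch below uses hypothetical data, independent of the arrays built above: an ordinary least-squares fit (via `np.linalg.lstsq`) to a clean linear trend, and to the same trend with one corrupted point. A single outlier shifts the slope substantially, which is exactly the sensitivity the Huber loss tempers.

```python
import numpy as np

# Hypothetical data: a clean line y = 3x + 1, then the same line with
# one strongly corrupted point, mimicking the contaminated set above.
x = np.arange(10, dtype=float)
y_clean = 3.0 * x + 1.0
y_dirty = y_clean.copy()
y_dirty[-1] += 100.0  # a single strong outlier

A = np.column_stack([x, np.ones_like(x)])
slope_clean = np.linalg.lstsq(A, y_clean, rcond=None)[0][0]
slope_dirty = np.linalg.lstsq(A, y_dirty, rcond=None)[0][0]

print(slope_clean)  # 3.0
print(slope_dirty)  # ~8.45: one corrupted point moved the slope by more than 5
```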


### Plot Results

In [4]:
def data_to_plotly(x):
    # Flatten the (n_samples, 1) column vector into a plain list for Plotly.
    k = []

    for i in range(0, len(x)):
        k.append(x[i][0])

    return k

In [5]:
data = []

p1 = go.Scatter(x=data_to_plotly(X), y=y,
                mode='markers',
                showlegend=False,
                marker=dict(color='blue', size=6)
                )
data.append(p1)
# Fit the huber regressor over a series of epsilon values.
colors = ['red', 'blue', 'yellow', 'magenta']

x = np.linspace(X.min(), X.max(), 7)
epsilon_values = [1.35, 1.5, 1.75, 1.9]

for k, epsilon in enumerate(epsilon_values):
    huber = HuberRegressor(fit_intercept=True, alpha=0.0, max_iter=100,
                           epsilon=epsilon)
    huber.fit(X, y)
    coef_ = huber.coef_ * x + huber.intercept_
    p2 = go.Scatter(x=x, y=coef_,
                    mode='lines',
                    line=dict(color=colors[k], width=1),
                    name="huber loss, %s" % epsilon)
    data.append(p2)

# Fit a ridge regressor to compare it with the Huber regressor.
ridge = Ridge(fit_intercept=True, alpha=0.0, random_state=0, normalize=True)
ridge.fit(X, y)
coef_ridge = ridge.coef_
coef_ = ridge.coef_ * x + ridge.intercept_
p3 = go.Scatter(x=x, y=coef_,
                mode='lines',
                line=dict(color='green', width=1),
                name="ridge regression")
data.append(p3)

layout = go.Layout(title="Comparison of HuberRegressor vs Ridge",
                   xaxis=dict(title='X', zeroline=False, showgrid=False),
                   yaxis=dict(title='Y', zeroline=False, showgrid=False),
                   hovermode='closest'
                   )
fig = go.Figure(data=data, layout=layout)
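The trend visible in the plot can also be checked numerically: with a larger epsilon more residuals fall in the quadratic region, so the Huber coefficient should land closer to the (essentially unregularized) Ridge coefficient. A minimal sketch, rebuilding the dataset from the Calculations section; a tiny `alpha=1e-4` penalty stands in for the `alpha=0.0, normalize=True` Ridge above, since `normalize` is not accepted by newer scikit-learn versions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import HuberRegressor, Ridge

# Rebuild the contaminated dataset from the Calculations section.
rng = np.random.RandomState(0)
X, y = make_regression(n_samples=20, n_features=1, random_state=0, noise=4.0,
                       bias=100.0)
X_outliers = rng.normal(0, 0.5, size=(4, 1))
y_outliers = rng.normal(0, 2.0, size=4)
X_outliers[:2, :] += X.max() + X.mean() / 4.
X_outliers[2:, :] += X.min() - X.mean() / 4.
y_outliers[:2] += y.min() - y.mean() / 4.
y_outliers[2:] += y.max() + y.mean() / 4.
X = np.vstack((X, X_outliers))
y = np.concatenate((y, y_outliers))

# Near-zero penalty: Ridge here is essentially ordinary least squares.
ridge_coef = Ridge(fit_intercept=True, alpha=1e-4).fit(X, y).coef_[0]
huber_135 = HuberRegressor(epsilon=1.35, alpha=0.0,
                           max_iter=100).fit(X, y).coef_[0]
huber_19 = HuberRegressor(epsilon=1.9, alpha=0.0,
                          max_iter=100).fit(X, y).coef_[0]

# The larger epsilon should sit closer to the ridge coefficient.
print(abs(huber_19 - ridge_coef) < abs(huber_135 - ridge_coef))
```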

In [6]:
py.iplot(fig)

Out[6]:

Authors:

    Manoj Kumar mks542@nyu.edu

License:

    BSD 3 clause