
# Feature Transformations with Ensembles of Trees in Scikit-learn

Transform your features into a higher dimensional, sparse space. Then train a linear model on these features.

First fit an ensemble of trees (totally random trees, a random forest, or gradient boosted trees) on the training set. Then each leaf of each tree in the ensemble is assigned a fixed arbitrary feature index in a new feature space. These leaf indices are then encoded in a one-hot fashion.

Each sample goes through the decisions of each tree of the ensemble and ends up in one leaf per tree. The sample is encoded by setting feature values for these leaves to 1 and the other feature values to 0. The resulting transformer has then learned a supervised, sparse, high-dimensional categorical embedding of the data.
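To make the embedding concrete, here is a minimal sketch (with assumed small sample and tree counts for illustration): `apply()` maps each sample to the index of the leaf it reaches in each tree, and `OneHotEncoder` turns those per-tree leaf indices into a sparse binary matrix.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=100, random_state=0)
forest = RandomForestClassifier(n_estimators=3, max_depth=2,
                                random_state=0).fit(X, y)

# apply() returns, for each sample, the leaf index it lands in per tree
leaves = forest.apply(X)                            # shape: (n_samples, n_trees)

# One-hot encode the leaf indices: one sparse binary column per distinct leaf
embedding = OneHotEncoder().fit_transform(leaves)

print(leaves.shape)       # (100, 3)
print(embedding.shape)    # (100, total number of distinct leaves)
```

Each row of `embedding` has exactly one nonzero entry per tree, which is the high-dimensional sparse representation the linear model is trained on below.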

#### New to Plotly?

You can set up Plotly to work in online or offline mode, or in Jupyter notebooks.
We also have a quick-reference cheatsheet (new!) to help you get started!

### Version

In [1]:
import sklearn
sklearn.__version__

Out[1]:
'0.18.1'

### Imports

In [2]:
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools

import numpy as np
np.random.seed(10)

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomTreesEmbedding, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.pipeline import make_pipeline


### Calculations

In [3]:
n_estimator = 10
X, y = make_classification(n_samples=80000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
# It is important to train the ensemble of trees on a different subset
# of the training data than the logistic regression model, to avoid
# overfitting, in particular if the total number of leaves is
# similar to the number of training samples
X_train, X_train_lr, y_train, y_train_lr = train_test_split(X_train,
                                                            y_train,
                                                            test_size=0.5)

# Unsupervised transformation based on totally random trees
rt = RandomTreesEmbedding(max_depth=3, n_estimators=n_estimator,
                          random_state=0)

rt_lm = LogisticRegression()
pipeline = make_pipeline(rt, rt_lm)
pipeline.fit(X_train, y_train)
y_pred_rt = pipeline.predict_proba(X_test)[:, 1]
fpr_rt_lm, tpr_rt_lm, _ = roc_curve(y_test, y_pred_rt)

# Supervised transformation based on random forests
rf = RandomForestClassifier(max_depth=3, n_estimators=n_estimator)
rf_enc = OneHotEncoder()
rf_lm = LogisticRegression()
rf.fit(X_train, y_train)
rf_enc.fit(rf.apply(X_train))
rf_lm.fit(rf_enc.transform(rf.apply(X_train_lr)), y_train_lr)

y_pred_rf_lm = rf_lm.predict_proba(rf_enc.transform(rf.apply(X_test)))[:, 1]
fpr_rf_lm, tpr_rf_lm, _ = roc_curve(y_test, y_pred_rf_lm)

# Supervised transformation based on gradient boosted trees
grd = GradientBoostingClassifier(n_estimators=n_estimator)
grd_enc = OneHotEncoder()
grd_lm = LogisticRegression()
grd.fit(X_train, y_train)
grd_enc.fit(grd.apply(X_train)[:, :, 0])
grd_lm.fit(grd_enc.transform(grd.apply(X_train_lr)[:, :, 0]), y_train_lr)

y_pred_grd_lm = grd_lm.predict_proba(
    grd_enc.transform(grd.apply(X_test)[:, :, 0]))[:, 1]
fpr_grd_lm, tpr_grd_lm, _ = roc_curve(y_test, y_pred_grd_lm)

# The gradient boosted model by itself
y_pred_grd = grd.predict_proba(X_test)[:, 1]
fpr_grd, tpr_grd, _ = roc_curve(y_test, y_pred_grd)

# The random forest model by itself
y_pred_rf = rf.predict_proba(X_test)[:, 1]
fpr_rf, tpr_rf, _ = roc_curve(y_test, y_pred_rf)

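Before plotting, the curves can also be summarized numerically with the area under the ROC curve. A self-contained sketch for the RF + LR pipeline (using an assumed smaller sample size than above, for speed):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
# Separate subsets for the forest and the linear model, as above
X_train_rf, X_train_lr, y_train_rf, y_train_lr = train_test_split(
    X_train, y_train, test_size=0.5, random_state=0)

rf = RandomForestClassifier(max_depth=3, n_estimators=10,
                            random_state=0).fit(X_train_rf, y_train_rf)
enc = OneHotEncoder().fit(rf.apply(X_train_rf))
lr = LogisticRegression().fit(enc.transform(rf.apply(X_train_lr)), y_train_lr)

scores = lr.predict_proba(enc.transform(rf.apply(X_test)))[:, 1]
fpr, tpr, _ = roc_curve(y_test, scores)
roc_auc = auc(fpr, tpr)
print('RF + LR AUC: %.3f' % roc_auc)
```

A single AUC number per model makes the comparison in the plots below easy to rank at a glance.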

### ROC Curve

In [4]:
p1 = go.Scatter(x=[0, 1], y=[0, 1],
                mode='lines',
                showlegend=False,
                line=dict(color='black', dash='dash'))

p2 = go.Scatter(x=fpr_rt_lm, y=tpr_rt_lm,
                mode='lines',
                name='RT + LR')

p3 = go.Scatter(x=fpr_rf, y=tpr_rf,
                mode='lines',
                name='RF')

p4 = go.Scatter(x=fpr_rf_lm, y=tpr_rf_lm,
                mode='lines',
                name='RF + LR')

p5 = go.Scatter(x=fpr_grd, y=tpr_grd,
                mode='lines',
                name='GBT')

p6 = go.Scatter(x=fpr_grd_lm, y=tpr_grd_lm,
                mode='lines',
                name='GBT + LR')

layout = go.Layout(title='ROC curve',
                   xaxis=dict(title='False positive rate'),
                   yaxis=dict(title='True positive rate'))
fig = go.Figure(data=[p1, p2, p3, p4, p5, p6], layout=layout)

In [5]:
py.iplot(fig)

Out[5]:

### ROC curve (zoomed in at top left)

In [6]:
p1 = go.Scatter(x=[0, 1], y=[0, 1],
                mode='lines',
                showlegend=False,
                line=dict(color='black', dash='dash'))

p2 = go.Scatter(x=fpr_rt_lm, y=tpr_rt_lm,
                mode='lines',
                name='RT + LR')

p3 = go.Scatter(x=fpr_rf, y=tpr_rf,
                mode='lines',
                name='RF')

p4 = go.Scatter(x=fpr_rf_lm, y=tpr_rf_lm,
                mode='lines',
                name='RF + LR')

p5 = go.Scatter(x=fpr_grd, y=tpr_grd,
                mode='lines',
                name='GBT')

p6 = go.Scatter(x=fpr_grd_lm, y=tpr_grd_lm,
                mode='lines',
                name='GBT + LR')

layout = go.Layout(title='ROC curve (zoomed in at top left)',
                   xaxis=dict(title='False positive rate', range=[0, 0.20]),
                   yaxis=dict(title='True positive rate', range=[0.80, 1]))
fig = go.Figure(data=[p1, p2, p3, p4, p5, p6], layout=layout)

In [7]:
py.iplot(fig)

Out[7]:

Author:

    Tim Head <betatim@gmail.com>

License:

    BSD 3 clause