
Histogram

A histogram is a chart that divides data into bins, each covering a numeric range, and draws one bar per bin whose height corresponds to the number of data points falling in that bin.
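
To make the counting step concrete, here is a small hand-rolled sketch (the sample values and bin edges are hypothetical, chosen only for illustration; the rest of this tutorial uses np.histogram for the same job):

# Hand-rolled binning, just to illustrate what a histogram counts.
# The sample values and bin edges below are made up for this example.
sample = [1.2, 1.9, 2.4, 3.1, 3.3, 4.8]
edges = [1.0, 2.0, 3.0, 4.0, 5.0]      # four bins: [1, 2), [2, 3), [3, 4), [4, 5]

counts = [0] * (len(edges) - 1)
for value in sample:
    for i in range(len(counts)):
        # The last bin is closed on the right, matching np.histogram's convention.
        last_bin = (i == len(counts) - 1) and value == edges[i + 1]
        if edges[i] <= value < edges[i + 1] or last_bin:
            counts[i] += 1
            break

print(counts)  # [2, 1, 2, 1]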

New to Plotly?

Plotly's Python library is free and open source! Get started by downloading the client and reading the primer.
You can set up Plotly to work in online or offline mode, or in Jupyter notebooks.
We also have a quick-reference cheatsheet (new!) to help you get started!
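
For example, a minimal offline setup is sketched below; it assumes the same legacy plotly package (with plotly.offline and plotly.graph_objs) used throughout this tutorial:

# Offline mode: render charts locally without sending them to a Plotly account.
# Assumes the legacy plotly package used elsewhere in this tutorial.
import plotly.offline as pyo
import plotly.graph_objs as go

pyo.init_notebook_mode(connected=True)  # only needed inside a Jupyter notebook

trace = go.Histogram(x=[1, 2, 2, 3, 3, 3])
pyo.iplot([trace])                      # draws the chart in the notebook output cell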

Imports

This tutorial imports Plotly, NumPy, and pandas.

In [1]:
import plotly.plotly as py
from plotly.tools import FigureFactory as FF

import numpy as np
import pandas as pd

Import Data

For this histogram example, we will import real wind speed data recorded in Laurel, Nebraska.

In [2]:
import plotly.plotly as py
from plotly.tools import FigureFactory as FF

data = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/wind_speed_laurel_nebraska.csv')
df = data[0:10]  # preview the first ten rows of the dataset

table = FF.create_table(df)
py.iplot(table, filename='wind-data-sample')
Out[2]:

Histogram

Using np.histogram() we can compute histogram data from a data array. This function returns both the histogram values (i.e. the count for each bin) and the bin edges, which denote the intervals to which the histogram values correspond.

In [3]:
import plotly.plotly as py
import plotly.graph_objs as go

data_array = np.array(data['10 Min Std Dev'])  # column to histogram
hist_data = np.histogram(data_array)           # (counts, bin_edges)
binsize = hist_data[1][1] - hist_data[1][0]    # width of each equal-width bin

trace1 = go.Histogram(
    x=data_array,
    histnorm='count',
    name='Histogram of Wind Speed',
    autobinx=False,  # use the bin edges computed by np.histogram above
    xbins=dict(
        start=hist_data[1][0],
        end=hist_data[1][-1],
        size=binsize
    )
)

trace_data = [trace1]
layout = go.Layout(
    bargroupgap=0.3
)
fig = go.Figure(data=trace_data, layout=layout)
py.iplot(fig)
Out[3]:
In [4]:
hist_data
Out[4]:
(array([ 91, 104,  22,   2,   1,   0,   0,   0,   0,   1]),
 array([  0.91 ,   2.182,   3.454,   4.726,   5.998,   7.27 ,   8.542,
          9.814,  11.086,  12.358,  13.63 ]))
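
Unpacking that tuple makes the two arrays easier to work with. The quick check below is not part of the original notebook output; it simply confirms that the ten counts are bounded by eleven evenly spaced edges and account for every observation:

# Sanity check on np.histogram's return value (assumes the cells above have run).
counts, bin_edges = hist_data

print(len(counts), len(bin_edges))               # 10 bins are bounded by 11 edges
print(counts.sum() == len(data_array))           # every observation lands in exactly one bin
print(np.allclose(np.diff(bin_edges), binsize))  # all bins share the same width
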
In [2]:
help(np.histogram)
Help on function histogram in module numpy.lib.function_base:

histogram(a, bins=10, range=None, normed=False, weights=None, density=None)
    Compute the histogram of a set of data.
    
    Parameters
    ----------
    a : array_like
        Input data. The histogram is computed over the flattened array.
    bins : int or sequence of scalars or str, optional
        If `bins` is an int, it defines the number of equal-width
        bins in the given range (10, by default). If `bins` is a
        sequence, it defines the bin edges, including the rightmost
        edge, allowing for non-uniform bin widths.
    
        .. versionadded:: 1.11.0
    
        If `bins` is a string from the list below, `histogram` will use
        the method chosen to calculate the optimal bin width and
        consequently the number of bins (see `Notes` for more detail on
        the estimators) from the data that falls within the requested
        range. While the bin width will be optimal for the actual data
        in the range, the number of bins will be computed to fill the
        entire range, including the empty portions. For visualisation,
        using the 'auto' option is suggested. Weighted data is not
        supported for automated bin size selection.
    
        'auto'
            Maximum of the 'sturges' and 'fd' estimators. Provides good
            all round performance
    
        'fd' (Freedman Diaconis Estimator)
            Robust (resilient to outliers) estimator that takes into
            account data variability and data size.
    
        'doane'
            An improved version of Sturges' estimator that works better
            with non-normal datasets.
    
        'scott'
            Less robust estimator that takes into account data
            variability and data size.
    
        'rice'
            Estimator does not take variability into account, only data
            size. Commonly overestimates number of bins required.
    
        'sturges'
            R's default method, only accounts for data size. Only
            optimal for gaussian data and underestimates number of bins
            for large non-gaussian datasets.
    
        'sqrt'
            Square root (of data size) estimator, used by Excel and
            other programs for its speed and simplicity.
    
    range : (float, float), optional
        The lower and upper range of the bins.  If not provided, range
        is simply ``(a.min(), a.max())``.  Values outside the range are
        ignored. The first element of the range must be less than or
        equal to the second. `range` affects the automatic bin
        computation as well. While bin width is computed to be optimal
        based on the actual data within `range`, the bin count will fill
        the entire range including portions containing no data.
    normed : bool, optional
        This keyword is deprecated in Numpy 1.6 due to confusing/buggy
        behavior. It will be removed in Numpy 2.0. Use the ``density``
        keyword instead. If ``False``, the result will contain the
        number of samples in each bin. If ``True``, the result is the
        value of the probability *density* function at the bin,
        normalized such that the *integral* over the range is 1. Note
        that this latter behavior is known to be buggy with unequal bin
        widths; use ``density`` instead.
    weights : array_like, optional
        An array of weights, of the same shape as `a`.  Each value in
        `a` only contributes its associated weight towards the bin count
        (instead of 1). If `density` is True, the weights are
        normalized, so that the integral of the density over the range
        remains 1.
    density : bool, optional
        If ``False``, the result will contain the number of samples in
        each bin. If ``True``, the result is the value of the
        probability *density* function at the bin, normalized such that
        the *integral* over the range is 1. Note that the sum of the
        histogram values will not be equal to 1 unless bins of unity
        width are chosen; it is not a probability *mass* function.
    
        Overrides the ``normed`` keyword if given.
    
    Returns
    -------
    hist : array
        The values of the histogram. See `density` and `weights` for a
        description of the possible semantics.
    bin_edges : array of dtype float
        Return the bin edges ``(length(hist)+1)``.
    
    
    See Also
    --------
    histogramdd, bincount, searchsorted, digitize
    
    Notes
    -----
    All but the last (righthand-most) bin is half-open.  In other words,
    if `bins` is::
    
      [1, 2, 3, 4]
    
    then the first bin is ``[1, 2)`` (including 1, but excluding 2) and
    the second ``[2, 3)``.  The last bin, however, is ``[3, 4]``, which
    *includes* 4.
    
    .. versionadded:: 1.11.0
    
    The methods to estimate the optimal number of bins are well founded
    in literature, and are inspired by the choices R provides for
    histogram visualisation. Note that having the number of bins
    proportional to :math:`n^{1/3}` is asymptotically optimal, which is
    why it appears in most estimators. These are simply plug-in methods
    that give good starting points for number of bins. In the equations
    below, :math:`h` is the binwidth and :math:`n_h` is the number of
    bins. All estimators that compute bin counts are recast to bin width
    using the `ptp` of the data. The final bin count is obtained from
    ``np.round(np.ceil(range / h))``.
    
    'Auto' (maximum of the 'Sturges' and 'FD' estimators)
        A compromise to get a good value. For small datasets the Sturges
        value will usually be chosen, while larger datasets will usually
        default to FD.  Avoids the overly conservative behaviour of FD
        and Sturges for small and large datasets respectively.
        Switchover point is usually :math:`a.size \approx 1000`.
    
    'FD' (Freedman Diaconis Estimator)
        .. math:: h = 2 \frac{IQR}{n^{1/3}}
    
        The binwidth is proportional to the interquartile range (IQR)
        and inversely proportional to cube root of a.size. Can be too
        conservative for small datasets, but is quite good for large
        datasets. The IQR is very robust to outliers.
    
    'Scott'
        .. math:: h = \sigma \sqrt[3]{\frac{24 * \sqrt{\pi}}{n}}
    
        The binwidth is proportional to the standard deviation of the
        data and inversely proportional to cube root of ``x.size``. Can
        be too conservative for small datasets, but is quite good for
        large datasets. The standard deviation is not very robust to
        outliers. Values are very similar to the Freedman-Diaconis
        estimator in the absence of outliers.
    
    'Rice'
        .. math:: n_h = 2n^{1/3}
    
        The number of bins is only proportional to cube root of
        ``a.size``. It tends to overestimate the number of bins and it
        does not take into account data variability.
    
    'Sturges'
        .. math:: n_h = \log_{2}(n) + 1
    
        The number of bins is the base 2 log of ``a.size``.  This
        estimator assumes normality of data and is too conservative for
        larger, non-normal datasets. This is the default method in R's
        ``hist`` method.
    
    'Doane'
        .. math:: n_h = 1 + \log_{2}(n) +
                        \log_{2}\left(1 + \frac{|g_1|}{\sigma_{g_1}}\right)
    
            g_1 = mean[(\frac{x - \mu}{\sigma})^3]
    
            \sigma_{g_1} = \sqrt{\frac{6(n - 2)}{(n + 1)(n + 3)}}
    
        An improved version of Sturges' formula that produces better
        estimates for non-normal datasets. This estimator attempts to
        account for the skew of the data.
    
    'Sqrt'
        .. math:: n_h = \sqrt n
        The simplest and fastest estimator. Only takes into account the
        data size.
    
    Examples
    --------
    >>> np.histogram([1, 2, 1], bins=[0, 1, 2, 3])
    (array([0, 2, 1]), array([0, 1, 2, 3]))
    >>> np.histogram(np.arange(4), bins=np.arange(5), density=True)
    (array([ 0.25,  0.25,  0.25,  0.25]), array([0, 1, 2, 3, 4]))
    >>> np.histogram([[1, 2, 1], [1, 0, 1]], bins=[0,1,2,3])
    (array([1, 4, 1]), array([0, 1, 2, 3]))
    
    >>> a = np.arange(5)
    >>> hist, bin_edges = np.histogram(a, density=True)
    >>> hist
    array([ 0.5,  0. ,  0.5,  0. ,  0. ,  0.5,  0. ,  0.5,  0. ,  0.5])
    >>> hist.sum()
    2.4999999999999996
    >>> np.sum(hist*np.diff(bin_edges))
    1.0
    
    .. versionadded:: 1.11.0
    
    Automated Bin Selection Methods example, using 2 peak random data
    with 2000 points:
    
    >>> import matplotlib.pyplot as plt
    >>> rng = np.random.RandomState(10)  # deterministic random data
    >>> a = np.hstack((rng.normal(size=1000),
    ...                rng.normal(loc=5, scale=2, size=1000)))
    >>> plt.hist(a, bins='auto')  # plt.hist passes its arguments to np.histogram
    >>> plt.title("Histogram with 'auto' bins")
    >>> plt.show()
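
The string-based estimators documented above can also drive the Plotly chart from earlier. The sketch below is an illustration rather than part of the original page: it lets NumPy pick the bin edges with the 'auto' estimator and reuses them for the trace (the filename is hypothetical, and the earlier cells are assumed to have run):

# Let NumPy's 'auto' estimator choose the bin edges, then reuse them in Plotly.
# Assumes data_array, np, go, and py from the cells above; the filename is hypothetical.
auto_counts, auto_edges = np.histogram(data_array, bins='auto')

trace = go.Histogram(
    x=data_array,
    autobinx=False,
    xbins=dict(
        start=auto_edges[0],
        end=auto_edges[-1],
        size=auto_edges[1] - auto_edges[0]
    )
)
py.iplot(go.Figure(data=[trace]), filename='wind-histogram-auto-bins')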
