This tutorial was generated from a Jupyter notebook. You can download the notebook here.
# Imports from __future__ in case we're running Python 2
from __future__ import division, print_function
from __future__ import absolute_import, unicode_literals
# Our numerical workhorses
import numpy as np
import pandas as pd
# Import pyplot for plotting
import matplotlib.pyplot as plt
# Seaborn, useful for graphics
import seaborn as sns
# We'll use Pybeeswarm
import beeswarm as bs
# Import Bokeh modules for interactive plotting
import bokeh.io
import bokeh.mpl
import bokeh.plotting
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables high res graphics inline (only use with static plots (non-Bokeh))
# SVG is preferred, but there is a bug in Jupyter with vertical lines
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
      'axes.labelsize': 18,
      'axes.titlesize': 18,
      'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style('darkgrid', rc=rc)
# Set up Bokeh for inline viewing
bokeh.io.output_notebook()
In this tutorial, we will learn how to load data stored on disk into a Python data structure. We will use pandas to read in CSV (comma-separated value) files and store the results in the very handy Pandas DataFrame. We will then "play" with the data to get a feel for it. This process is called exploratory data analysis, a term coined by John Tukey, and is an important first step in the analysis of a data set.
The data set we will use comes from a fun paper about the adhesive properties of frog tongues. The reference is Kleinteich and Gorb, Tongue adhesion in the horned frog Ceratophrys sp., Sci. Rep., 4, 5225, 2014. You can download the paper here. You might also want to check out a New York Times feature on the paper here.
In this paper, the authors investigated various properties of the adhesive characteristics of the tongues of horned frogs when they strike prey. The authors had a striking pad connected to a cantilever to measure forces. They also used high speed cameras to capture the strike and record relevant data.
As I mentioned in the last tutorial, we need to import the modules we need for data analysis. An important addition is Pandas, which we import as pd. As is good practice, I import everything at the top of the document.
def head(file_name, n_lines):
    """Display first `n_lines` lines of file."""
    with open(file_name, 'r') as f:
        for i, line in enumerate(f):
            if i >= n_lines:
                break
            print(line.rstrip())
# Use our handy function
head('../data/kleinteich_and_gorb/frog_tongue_adhesion.csv', 20)
The first lines all begin with # signs, signifying that they are comments and not data. They do give important information, though, such as the meaning of the ID data. The ID refers to which specific frog was tested.
Immediately after the comments, we have a row of comma-separated headers. This row sets the number of columns in this data set and labels the meaning of the columns. So, we see that the first column is the date of the experiment, the second column is the ID of the frog, the third is the trial number, and so on.
After this row, each row represents a single experiment where the frog struck the target.
CSV files are generally a good way to store data. Commas make better delimiters than white space (such as tabs) because they have no portability issues. Delimiter collision is avoided by putting the data fields in double quotes when necessary. There are other good ways to store data, such as JSON, but we will almost exclusively use CSV files in this class.
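To make the point about delimiter collision concrete, here is a minimal sketch using Python's built-in csv module (with made-up field values, not the frog data). A field containing a comma is simply wrapped in double quotes on output and recovered intact on input:

```python
import csv
import io

# Write a row in which one field contains a comma
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
writer.writerow(['2013_05_27', 'III', 'note, with comma'])
csv_text = buf.getvalue().strip()
print(csv_text)  # 2013_05_27,III,"note, with comma"

# Reading it back recovers the original three fields, comma intact
fields = next(csv.reader(io.StringIO(csv_text)))
print(fields)  # ['2013_05_27', 'III', 'note, with comma']
```

The quoting is handled automatically by the csv module, which is exactly why delimiter collision is a non-issue for well-formed CSV files.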
We will use pd.read_csv() to load the data set. The data are stored in a DataFrame, which is one of the data types that makes pandas so convenient for use in data analysis. DataFrames offer mixed data types, including incomplete columns, and convenient slicing, among many, many other convenient features. We will use the DataFrame to look at the data, at the same time demonstrating some of the power of DataFrames. They are like spreadsheets, only a lot better.
We will now load the DataFrame. On my machine, the data are stored in the file ../data/kleinteich_and_gorb/frog_tongue_adhesion.csv.
# Use pd.read_csv() to read in the data and store in a DataFrame
fname = '../data/kleinteich_and_gorb/frog_tongue_adhesion.csv'
df = pd.read_csv(fname, comment='#')
Notice that we used the kwarg comment to specify that lines that begin with # are comments and are to be ignored. If you check out the doc string for pd.read_csv(), you will see there are lots of options for reading in the data.
Let's jump right in and look at the contents of the DataFrame. We can look at the first several rows using the df.head() method.
# Look at the contents (first 5 rows by default)
df.head()
We see that the column headings were automatically assigned. pandas also automatically defined the indices (names of the rows) as integers going up from zero. We could have defined the indices to be any of the columns of data.
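For instance, we could promote a column to be the row index with set_index() (or by passing index_col to pd.read_csv()). A minimal sketch with a small made-up DataFrame, not the actual frog data:

```python
import pandas as pd

# A tiny stand-in for the frog data
df_demo = pd.DataFrame({'ID': ['I', 'II', 'III'],
                        'impact force (mN)': [1205, 602, 544]})

# Promote the ID column to be the row index
df_by_id = df_demo.set_index('ID')

# Rows are now labeled by frog ID instead of 0, 1, 2
print(df_by_id.loc['II', 'impact force (mN)'])  # 602
```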
To access a column of data, we use the following syntax.
# Slicing a column out of a DataFrame is achieved by using the column name
df['impact force (mN)']
The indexing of the rows is preserved, and we can see that we can easily extract all of the impact forces. Note, though, that pd.read_csv() interpreted the data to be integer (dtype = int64), so we may want to convert these to floats.
# Use df.astype() method to convert it to a NumPy float64 data type.
df['impact force (mN)'] = df['impact force (mN)'].astype(np.float64)
# Check that it worked
print('dtype = ', df['impact force (mN)'].dtype)
Now let's say we only want the impact force of strikes above one Newton. Pandas DataFrames can conveniently be sliced with Booleans.
# Generate True/False array of rows for indexing
inds = df['impact force (mN)'] > 1000.0
# Slice out rows we want (big force rows)
df_big_force = df[inds]
# Let's look at it; there will be only a few high-force values
df_big_force
So now we only have the strikes of high force. Note, though, that the original indexing of rows was retained! In our new DataFrame with only the big force strikes, there is no index 4, for example.
# Executing the below will result in an exception
df_big_force['impact force (mN)'][4]
This might seem counterintuitive, but it is actually a good idea. Remember, indices do not have to be integers!
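To see why non-integer indices make this behavior sensible, here is a minimal sketch with a made-up Series whose index labels are strings. Boolean slicing preserves the labels, and label-based access still works on the slice:

```python
import pandas as pd

# A Series whose index labels are strings, not integers
s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])

# Boolean slicing keeps the original labels
big = s[s > 15]
print(list(big.index))  # ['b', 'c']

# Label-based access still works on the slice
print(big['c'])  # 30
```

Asking for big['a'] would raise a KeyError, just as asking for index 4 did above.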
There is a way around this, though. We can use the iloc attribute of a DataFrame. This gives indexing with sequential integers. This is generally not advised, and I debated even showing this.
# Using iloc enables indexing by the corresponding sequence of integers
df_big_force['impact force (mN)'].iloc[4]
In our DataFrame, the data are tidy. The concept of tidy data originally comes from the development of databases, but has taken hold for more general data processing in recent years. See this paper by Hadley Wickham for a great discussion of this. Tidy data refers to data sets arranged in tabular form that have the following format.
Each variable forms a column.
Each observation forms a row.
Each type of observational unit forms its own table.
As we will see, structuring the data in this way allows for very easy access. Furthermore, if you know a priori that a data set is tidy, there is little need for munging; you already know how to pull out what you need.
We will work on tidying messy data ("messy data" is anything that is not tidy) next week. For now, we will take advantage of the fact that Kleinteich and Gorb generously provided us with tidy data.
Given that the data are tidy and one row represents a single experiment, we might like to extract a row and see all of the measured quantities from a given strike. This is also very easy with DataFrames using their loc attribute.
# Let's slice out the experiment with index 42
exp_42 = df.loc[42]
# And now let's look at it
exp_42
Notice how clever the DataFrame is. We sliced a row out, and now the indices describing its elements are the column headings. This is a Pandas Series object.
Now, slicing out index 42 is not very meaningful because the indices are arbitrary. Instead, we might look at our lab notebook, and want to look at trial number 3 on May 27, 2013, of frog III. This is a more common, and indeed meaningful, use case.
# Set up Boolean slicing
date = df['date'] == '2013_05_27'
trial = df['trial number'] == 3
ID = df['ID'] == 'III'
# Slice out the row
df.loc[date & trial & ID]
The difference here is that the returned slice is a DataFrame and not a Series. We can easily make it a Series using iloc. Note that in most use cases, this is just a matter of convenience for viewing and nothing more.
df.loc[date & trial & ID].iloc[0]
You may take issue with the rather lengthy syntax of accessing column names. I.e., if you were trying to access the ratio of the contact area with mucus to the contact area without mucus for trial number 3 on May 27, 2013, you would do the following.
# Set up criteria for our search of the DataFrame
date = df['date'] == '2013_05_27'
trial = df['trial number'] == 3
ID = df['ID'] == 'III'
# When indexing DataFrames, use & for Boolean and (and | for or; ~ for not)
df['contact area with mucus / contact area without mucus'][date & trial & ID]
That syntax can be annoying. But many would argue that it is preferred because there is no ambiguity about what you are asking for. However, you may want to use shorter names. Conveniently, when your column names do not have spaces, you can use attribute access. For example...
# Attribute access
df.date[:10]
Attribute access is really for convenience and is not the default, but it can make writing the code much less cumbersome. So, let's change the names of the columns as follows:
'trial number' → 'trial'
'contact area with mucus / contact area without mucus' → 'ca_ratio'
DataFrames have a nice rename method to do this.
# Make a dictionary to rename columns
rename_dict = {'trial number' : 'trial',
               'contact area with mucus / contact area without mucus' : 'ca_ratio'}
# Rename the columns
df = df.rename(columns=rename_dict)
# Try out our new column name
df.ca_ratio[42]
Notice that we have introduced a new Python data structure, the dictionary. A dictionary is a collection of objects, each one with a key to use for indexing. For example, in rename_dict, we can look up what we renamed 'trial number' to.
# Indexing of dictionaries looks syntactically similar to cols in DataFrames
rename_dict['trial number']
We can go on and on about indexing Pandas DataFrames, because there are many ways to do it. For much more on all of the clever ways you can access data and subsets thereof in DataFrames, see the pandas docs on indexing.
However, if our data are tidy, there is really no need to go beyond what we have done here. You really only need to generate Boolean querying arrays for tidy data, and you get a tidy data structure out that has exactly what you want. So, we will not really go any more into indexing in the course, except where needed to make data tidy.
"Exploratory data analysis" is the time during data analysis where you explore your data. You look at numbers, plot various quantities, and think about what analysis techniques you would like to use. pandas
DataFrames help you do this.
As we go through the interactive analysis, we will learn about syntax for various matplotlib plot styles.
The first thing we'll do is look at strike forces. We'll rename the column for impact force for convenient attribute access because we'll access it many times.
df = df.rename(columns={'impact force (mN)': 'impf'})
Now, let's start plotting the impact forces to see what we're dealing with. We'll start with the most naive plot.
# Just make a scatter plot of forces
plt.plot(df.impf, 'o')
plt.margins(0.02)
plt.xlabel('order in DataFrame')
plt.ylabel('impact force (mN)')
The $x$-axis is pretty meaningless. So, instead let's plot a histogram of impact forces.
# Make a histogram; the bins kwarg gives the number of bars
_ = plt.hist(df.impf, bins=20)
plt.xlabel('impact force (mN)')
plt.ylabel('number of observations')
This is a better way to look at the impact force measurements. We see that there are a few high-force impacts, but that most of the impacts are about 500 mN or so.
This is still only part of the story. We would like to know how the impacts vary from frog to frog. First, let's see how many trials we have for each frog.
# This is a fancy way to do string formatting; unnecessary, but nice
print("""
Frog ID Number of samples
======= =================
I {0:d}
II {1:d}
III {2:d}
IV {3:d}
""".format(df.ID[df.ID=='I'].count(), df.ID[df.ID=='II'].count(),
df.ID[df.ID=='III'].count(), df.ID[df.ID=='IV'].count()))
So, we only have 20 samples for each frog. That's a bit few to construct a meaningful histogram for each frog. So, maybe we can make a bar graph showing the mean and standard deviation of each sample.
# Compute the mean impacts
mean_impacts = df.groupby('ID').impf.mean()
# We will use standard deviation as error bar
# Default DataFrame std is normalized by N-1; more on that in lecture
std_impacts = df.groupby('ID').impf.std()
# Bar locations
x = np.arange(4)
# Bar widths
bar_width = 0.5
# How to label each bar
bar_labels = df.ID.unique()
# We use matplotlib's bar function to make plot
plt.bar(x, mean_impacts, yerr=std_impacts, width=bar_width, align='center',
        error_kw={'ecolor' : 'black'})
plt.grid(axis='x')
plt.xticks(x, bar_labels)
plt.ylabel('impact force (mN)')
Note that we used the groupby() method to generate our means and standard deviations for the bar graph. This is a very useful method, especially when your data are tidy. Calling df.groupby(col) groups the DataFrame into what are effectively new DataFrames in which all of the entries for col are the same. This allows us to compute the means and standard deviations for each frog.
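A minimal sketch of what groupby() computes, using made-up numbers rather than the frog data:

```python
import pandas as pd

# Toy data: two frogs, two strikes each
df_demo = pd.DataFrame({'ID': ['I', 'I', 'II', 'II'],
                        'impf': [1000.0, 2000.0, 400.0, 600.0]})

# Mean impact force within each ID group
means = df_demo.groupby('ID')['impf'].mean()
print(means['I'])   # 1500.0
print(means['II'])  # 500.0
```

Swapping mean() for std(), median(), or count() gives the corresponding per-group statistic.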
I had you do the plot that way because I want you to get comfortable with matplotlib, which is really diverse in its uses. However, you can make essentially the same figure in one or two lines using some of the awesomeness of DataFrames.
# Compute standard deviations grouped by ID
yerr = df.groupby('ID').impf.std()
# Plot the bar graphs
ax = df.groupby('ID').impf.mean().plot(kind='bar', yerr=yerr)
ax.grid(axis='x')
ax.set_ylabel('impact force (mN)')
Actually, I only showed this result to demonstrate the groupby() and plot() methods of DataFrames, because it is my opinion that bar graphs should almost never be used.
There. I said it. Despite their prevalence throughout the biological literature, I generally think bar charts are seldom a good way to show results. They are difficult to read, and often show little information. A better alternative is a box plot. Better still are jitter or beeswarm plots, which I will show below.
There are many formats for box plots, and Seaborn employs what, in my experience, seems to be the most common. (Matplotlib box plots are by default very ugly, which is why we'll use Seaborn directly to make the plots.) Note that the DataFrame must be tidy for Seaborn to make the box plot.
First, we'll make a plot, and then I'll describe what it means.
# Use Seaborn to make box plot
ax = sns.boxplot(x='ID', y='impf', data=df, width=0.5)
# Relabel axes
ax.set_xlabel('Frog')
ax.set_ylabel('Impact force (mN)')
The middle line in a box gives the median of the measurements. The bottom and top of the box are the 25th and 75th percentile values, called the first and third quartiles, respectively. The extrema of the whiskers represent the minimum and maximum values of the data, outliers excluded. Outliers, in this case, are denoted with pluses. How are outliers defined? Typically, for box plots, an outlier is defined as follows. The total length of the box is called the interquartile range (IQR), going from the first to the third quartile (Q1 and Q3, respectively). I.e., $\text{IQR} = \text{Q3} - \text{Q1}$. A datum is an outlier if its value $x$ satisfies
\begin{align} &x < \text{Q1} - \frac{3}{2}\,\text{IQR},\\ \text{or } &x > \text{Q3} + \frac{3}{2}\,\text{IQR}. \end{align}The factor of $3/2$ is sometimes changed to 3 to denote extreme outliers, and it can be adjusted with the whis keyword argument to sns.boxplot(). Any points outside of the range of the whiskers are plotted explicitly and considered to be outliers.
We only have 20 measurements for each frog. Wouldn't it be better just to plot all 20 instead of trying to distill them down to a mean and a standard deviation in the bar graph or quartiles in a box plot? After all, the impact force measurements might not be Gaussian distributed; they may be bimodal or something else. So, we would like to generate a column scatter plot. We will plot each data point for each frog in a column, and "jitter" its position along the $x$-axis. We make the points somewhat transparent to allow visualization of overlap. This is accomplished using Seaborn's stripplot() function.
# Make the jitter plot
ax = sns.stripplot(x='ID', y='impf', data=df, jitter=True, alpha=0.6)
# Relabel axes
ax.set_xlabel('Frog')
ax.set_ylabel('Impact force (mN)')
Very nice! Now we can see that frog I, an adult, strikes with a wide range of impact forces, and can strike really hard. Frog II, also an adult, tends to strike at around 500 mN, but occasionally will strike harder. Juvenile frog III is a pretty consistent striker, while frog IV can have some pretty weak strikes.
The column scatter plot is not difficult to look at. The informational content of the data does not need to be distilled into a box-and-whisker plot, and certainly not into a bar with a standard deviation. For large data sets, in my experience when the number of data points is above about 200, it is useful to also include the box plot with the jitter plot. This is also easily done with Seaborn.
# Use Seaborn to make box plot
ax = sns.boxplot(x='ID', y='impf', data=df, width=0.5)
# Make the jitter plot
ax = sns.stripplot(x='ID', y='impf', data=df, jitter=True, marker='o',
                   alpha=0.8, edgecolor='white')
# Relabel axes
ax.set_xlabel('Frog')
ax.set_ylabel('Impact force (mN)')
Plots like jitter plots in which the dots do not overlap are called beeswarm plots. For large numbers of data points, these are not favored over the jitter plot, but for small numbers of data points (fewer than 200 is a good rule of thumb), beeswarm plots tend to be easier to read, as each data point can be clearly seen. Unfortunately, matplotlib and seaborn do not natively have a clean way of doing this. It is definitely possible, but it is cumbersome.
Fortunately, there is a package to do it, pybeeswarm, written by Melissa Gymrek. You should have installed it when you did Tutorial 0a. We have to build a list of data points for each category by hand, since pybeeswarm does not use the groupby() functionality of DataFrames under the hood.
# Separate impact forces by frog
list_of_impfs = [df.impf[df.ID=='I'], df.impf[df.ID=='II'],
                 df.impf[df.ID=='III'], df.impf[df.ID=='IV']]
# Generate a beeswarm plot
_ = bs.beeswarm(list_of_impfs, labels=['I', 'II', 'III', 'IV'])
plt.grid(axis='x')
plt.ylabel('impact force (mN)')
We can get even more information. We might be interested to see if the variability in impact force is day-to-day or time-independent. So, we would like to make the beeswarm plot with different colors on different days. This requires a bit more code, but it is not too bad. (As is the case in many of the examples in the tutorials, there is likely a more concise way to do this.)
# Get list of dates; the unique method from a DataFrame works well
dates = df.date.unique()
# Number of unique dates
n_dates = len(dates)
# Assign colors to date names, calling colormap with value between 0 and 1
# We will use the qualitative colormap "Set1."
colors = []
for i in range(n_dates):
    colors.append(plt.cm.Set1(float(i) / float(n_dates)))
# Make a dictionary of dates and colors
color_dict = dict(zip(dates, colors))
# Sort the DataFrame by ID to make sure color labels are in correct order
# in the beeswarm plot
df_sorted = df.sort_values('ID')
# Make a list of colors
colors = []
for i in range(len(df)):
    colors.append(color_dict[df_sorted.date.iloc[i]])
# Make beeswarm with date coloring
bs_plot, ax = bs.beeswarm(list_of_impfs, labels=['I', 'II', 'III', 'IV'],
                          col=colors)
plt.grid(axis='x')
plt.ylabel('impact force (mN)')
We do see some clustering of colors, so there may be some correlation between the day of the measurement and the impact force of the frog's tongue. I am careful to say may be some correlation, since we have not carefully quantified it.
Now let's look at correlations. We may be curious: does a hard strike result in better adhesion? We can plot the adhesive force versus the impact force.
# Rename the column for convenience in future use
df.rename(columns={'adhesive force (mN)' : 'adhf'}, inplace=True)
# Plot adhesive force vs. impact force
plt.plot(df.impf, df.adhf, 'o', alpha=0.6)
plt.xlabel('impact force (mN)')
plt.ylabel('adhesive force (mN)')
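A first quantitative check of such a relationship is the Pearson correlation coefficient. Here is a minimal sketch using NumPy's corrcoef() on made-up force values (not the actual frog data):

```python
import numpy as np

# Hypothetical impact and adhesive forces (mN), made up for illustration
impf = np.array([500.0, 800.0, 1200.0, 2000.0])
adhf = np.array([300.0, 450.0, 600.0, 900.0])

# Pearson correlation coefficient between the two variables
r = np.corrcoef(impf, adhf)[0, 1]
print(round(r, 3))
```

A value of r near 1 indicates a strong positive linear relationship, though correlation alone does not establish the functional form, which is why we turn to regression later.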
Later in the course, we will learn how to do regressions to test the relationship between two variables.
In this tutorial, we have learned how to load data from CSV files into Pandas DataFrames. DataFrames are useful objects for looking at data from many angles. Together with matplotlib, you can start to look at qualitative features of your data and think about how you will go about doing your detailed analysis.