(c) 2016 Justin Bois. This work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.
This tutorial was generated from a Jupyter notebook. You can download the notebook here.
import numpy as np
import pandas as pd
# Import pyplot for plotting
import matplotlib.pyplot as plt
# Some pretty Seaborn settings
import seaborn as sns
rc={'lines.linewidth': 2, 'axes.labelsize': 14, 'axes.titlesize': 14}
sns.set(rc=rc)
# Make Matplotlib plots appear inline
%matplotlib inline
In this tutorial, we will learn how to load data stored on disk into a Python data structure. We will use pandas to read in CSV (comma separated value) files and store the results in the very handy Pandas DataFrame. We will then "play" with the data to get a feel for it. This process is called exploratory data analysis, a term coined by John Tukey, and is an important first step in the analysis of a data set.
The data set we will use comes from a fun paper about the adhesive properties of frog tongues. The reference is Kleinteich and Gorb, Tongue adhesion in the horned frog Ceratophrys sp., Sci. Rep., 4, 5225, 2014. You can download the paper here. You might also want to check out a New York Times feature on the paper here.
In this paper, the authors investigated various properties of the adhesive characteristics of the tongues of horned frogs when they strike prey. The authors had a striking pad connected to a cantilever to measure forces. They also used high speed cameras to capture the strike and record relevant data.
As I mentioned in the last tutorial, we begin by importing the modules we need for data analysis. An important addition is Pandas, which we import as pd. As is good practice, I import everything at the top of the document.
The data from the paper are contained in the file frog_tongue_adhesion.csv, which you can download here. We can look at its contents using head -n 20 frog_tongue_adhesion.csv from the command line. To invoke a command line command in a Jupyter notebook, precede it with an exclamation point.
!head -20 data/frog_tongue_adhesion.csv
The first lines all begin with # signs, signifying that they are comments and not data. They do give important information, though, such as the meaning of the ID data. The ID refers to which specific frog was tested.
Immediately after the comments, we have a row of comma-separated headers. This row sets the number of columns in this data set and labels the meaning of the columns. So, we see that the first column is the date of the experiment, the second column is the ID of the frog, the third is the trial number, and so on.
After this row, each row represents a single experiment where the frog struck the target.
CSV files are generally a good way to store data. Commas make better delimiters than white space (such as tabs) because they have no portability issues. Delimiter collision is avoided by putting the data fields in double quotes when necessary. There are other good ways to store data, such as JSON, but we will almost exclusively use CSV files in this class.
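For instance, here is a minimal sketch (the two-line CSV snippet is made up for illustration) showing that a comma inside double quotes is treated as part of the field, not as a delimiter.
# Parse a tiny in-memory CSV; the quoted comma stays inside one field
import io
csv_text = 'frog,note\nI,"strong strike, sticky tongue"\n'
pd.read_csv(io.StringIO(csv_text))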
We will use pd.read_csv() to load the data set. The data are stored in a DataFrame, which is one of the data types that makes pandas so convenient for use in data analysis. DataFrames offer mixed data types, including incomplete columns, and convenient slicing, among many, many other convenient features. We will use the DataFrame to look at the data, at the same time demonstrating some of the power of DataFrames. They are like spreadsheets, only a lot better.
We will now load the DataFrame.
# Use pd.read_csv() to read in the data and store in a DataFrame
df = pd.read_csv('data/frog_tongue_adhesion.csv', comment='#')
Notice that we used the kwarg comment to specify that lines beginning with # are comments and are to be ignored. If you check out the doc string for pd.read_csv(), you will see there are lots of options for reading in the data.
Let's jump right in and look at the contents of the DataFrame. We can look at the first several rows using the df.head() method.
# Look at the contents (first 5 rows)
df.head()
We see that the column headings were automatically assigned. pandas also automatically defined the indices (names of the rows) as integers going up from zero. We could have defined the indices to be any of the columns of data.
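For example, if we wanted the dates to be the row indices (purely illustrative; we will stick with the default integer index), we could pass the index_col kwarg.
# Illustration: use the date column as the row index instead
df_by_date = pd.read_csv('data/frog_tongue_adhesion.csv', comment='#',
                         index_col='date')
df_by_date.head()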
To access a column of data, we use the following syntax.
# Slicing a column out of a DataFrame is achieved by using the column name
df['impact force (mN)']
The indexing of the rows is preserved, and we can see that we can easily extract all of the impact forces. Note, though, that pd.read_csv() interpreted the data to be integers (dtype = int64), so we may want to convert these to floats.
# Use df.astype() method to convert it to a NumPy float64 data type.
df['impact force (mN)'] = df['impact force (mN)'].astype(np.float64)
# Check that it worked
print('dtype = ', df['impact force (mN)'].dtype)
Now let's say we only want the impact force of strikes above one Newton. Pandas DataFrames can conveniently be sliced with Booleans.
# Generate True/False array of rows for indexing
inds = df['impact force (mN)'] > 1000.0
# Slice out rows we want (big force rows)
df_big_force = df.loc[inds, :]
# Let's look at it; there will be only a few high-force values
df_big_force
Here, we used the .loc feature of a DataFrame. It allows us to index by row and column. We chose all columns (using a colon, :), and put an array of Booleans for the rows. We get back a DataFrame containing the rows associated with True in the array of Booleans.
So now we only have the strikes of high force. Note, though, that the original indexing of rows was retained! In our new DataFrame with only the big force strikes, there is no index 4, for example.
# Executing the below will result in an exception
df_big_force.loc[4, 'impact force (mN)']
This might seem counterintuitive, but it is actually a good idea. Remember, indices do not have to be integers!
There is a way around this, though. We can use the iloc attribute of a DataFrame. This gives indexing with sequential integers. This is generally not advised, and I debated even showing this.
# Using iloc enables indexing by the corresponding sequence of integers
df_big_force['impact force (mN)'].iloc[4]
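As an aside (my addition, not part of the original analysis), if you really do want fresh sequential integer indices, you can reset the index; drop=True discards the old one.
# Alternative sketch: give the sliced DataFrame a fresh integer index
df_big_force_reindexed = df_big_force.reset_index(drop=True)
df_big_force_reindexed.loc[4, 'impact force (mN)']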
In our DataFrame, the data are tidy. The concept of tidy data originally comes from the development of databases, but has taken hold for more general data processing in recent years. See this paper by Hadley Wickham for a great discussion. Tidy data refers to data sets arranged in tabular form that have the following format.
1. Each variable forms a column.
2. Each observation forms a row.
3. Each type of observational unit forms a table.
As we will see, structuring the data in this way allows for very easy access. Furthermore, if you know a priori that a data set is tidy, there is little need for munging; you already know how to pull out what you need.
We will work on tidying messy data ("messy data" is anything that is not tidy) next week. For now, we will take advantage of the fact that Kleinteich and Gorb generously provided us with tidy data.
Given that the data are tidy and one row represents a single experiment, we might like to extract a row and see all of the measured quantities from a given strike. This is also very easy with DataFrames using their loc attribute.
# Let's slice out experiment with index 42
exp_42 = df.loc[42, :]
# And now let's look at it
exp_42
Notice how clever the DataFrame is. We sliced a row out, and now the indices describing its elements are the column headings. This is a Pandas Series object.
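Because the indices of this Series are the column headings, we can pull out any single measured quantity by name (a quick illustration of my own).
# Extract one quantity from the Series by its column-heading index
exp_42['impact force (mN)']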
Now, slicing out index 42 is not very meaningful because the indices are arbitrary. Instead, we might look at our lab notebook and want to see trial number 3 on May 27, 2013, of frog III. This is a more common, and indeed meaningful, use case.
# Set up Boolean slicing
date = df['date'] == '2013_05_27'
trial = df['trial number'] == 3
ID = df['ID'] == 'III'
# Slice out the row
df.loc[date & trial & ID, :]
The difference here is that the returned slice is a DataFrame and not a Series. We can easily make it a Series using iloc. Note that in most use cases, this is just a matter of convenience for viewing and nothing more.
df.loc[date & trial & ID, :].iloc[0]
You may take issue with the rather lengthy syntax for accessing column names. I.e., if you were trying to access the ratio of the contact area with mucus to the contact area without mucus for trial number 3 on May 27, 2013, you would do the following.
# Set up criteria for our search of the DataFrame
date = df['date'] == '2013_05_27'
trial = df['trial number'] == 3
ID = df['ID'] == 'III'
# When indexing DataFrames, use & for Boolean and (and | for or; ~ for not)
df.loc[date & trial & ID, 'contact area with mucus / contact area without mucus']
That syntax can be annoying. But many would argue that it is preferred because there is no ambiguity about what you are asking for. However, you may want to use shorter names. Conveniently, when your column names do not have spaces, you can use attribute access. For example:
# Attribute access
df.date[:10]
Attribute access is really for convenience and is not the default, but it can make writing the code much less cumbersome. So, let's rename the columns as
'trial number' → 'trial'
'contact area with mucus / contact area without mucus' → 'ca_ratio'
DataFrames have a nice rename method to do this.
# Make a dictionary to rename columns
rename_dict = {'trial number' : 'trial',
'contact area with mucus / contact area without mucus' : 'ca_ratio'}
# Rename the columns
df = df.rename(columns=rename_dict)
# Try out our new column name
df.ca_ratio[42]
Notice that we have introduced a new Python data structure, the dictionary. A dictionary is a collection of objects, each one with a key to use for indexing. For example, in rename_dict, we can look up what we wanted to rename 'trial number' to.
# Indexing of dictionaries looks syntactically similar to cols in DataFrames
rename_dict['trial number']
We can go on and on about indexing Pandas DataFrames, because there are many ways to do it. For much more on all of the clever ways you can access data and subsets thereof in DataFrames, see the pandas docs on indexing.
However, if our data are tidy, there is really no need to go beyond what we have done here. You really only need to generate Boolean querying arrays for tidy data, and you get a tidy data structure out that has exactly what you want. So, we will not really go any more into indexing in the course, except where needed to make data tidy.
"Exploratory data analysis" is the time during data analysis where you explore your data. You look at numbers, plot various quantities, and think about what analysis techniques you would like to use. pandas
DataFrames help you do this.
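Before making any plots, it can help to get a quick numerical overview. As an aside (my addition), pandas's describe() method reports the count, mean, standard deviation, and quartiles for each numeric column.
# Quick numerical summary of all numeric columns
df.describe()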
As we go through the interactive analysis, we will learn about syntax for various matplotlib plot styles.
The first thing we'll do is look at strike forces. We'll rename the column for impact force for convenient attribute access because we'll access it many times.
df = df.rename(columns={'impact force (mN)': 'impf'})
Now, let's start plotting the impact forces to see what we're dealing with. We'll start with the most naive plot.
# Just make a scatter plot of forces
plt.plot(df.impf, marker='.', linestyle='none')
plt.margins(0.02)
plt.xlabel('order in DataFrame')
plt.ylabel('impact force (mN)');
The $x$-axis is pretty meaningless. So, instead let's plot a histogram of impact forces.
# Make a histogram; the bins kwarg gives the number of bars
_ = plt.hist(df.impf, bins=20)
plt.xlabel('impact force (mN)')
plt.ylabel('number of observations');
This is a better way to look at the impact force measurements. We see that there are a few high-force impacts, but that most of the impacts are about 500 mN or so.
This is still only part of the story. We would like to know how the impacts vary from frog to frog. First, let's see how many trials we have for each frog.
# This is a fancy way to do string formatting; unnecessary, but nice
print("""
Frog ID Number of samples
======= =================
I {0:d}
II {1:d}
III {2:d}
IV {3:d}
""".format(df.ID[df.ID=='I'].count(), df.ID[df.ID=='II'].count(),
df.ID[df.ID=='III'].count(), df.ID[df.ID=='IV'].count()))
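As an aside, pandas offers a more compact way to get the same counts with the value_counts() method (this one-liner is my addition).
# More compact alternative: count samples per frog in one line
df['ID'].value_counts()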
So, we only have 20 samples for each frog. That's a bit too few to construct a meaningful histogram for each frog. So, maybe we can make a bar graph showing the mean of each sample with an error bar (by default, Seaborn's error bars show a bootstrapped 95% confidence interval, not the standard deviation). This is very easy using Seaborn when the data are organized into a tidy DataFrame. We use the sns.barplot() function and supply the DataFrame containing the data using the data kwarg, the column corresponding to the $x$-axis using the x kwarg, and the column corresponding to the bar height as the y kwarg.
# Use Seaborn to quickly make bar graph
sns.barplot(data=df, x='ID', y='impf')
plt.xlabel('Frog')
plt.ylabel('impact force (mN)');
Despite their prevalence throughout the biological literature, I think bar charts are seldom a good way to show results. There, I said it. They are difficult to read and often show little information. A better alternative is a box plot. Better still are jitter or beeswarm plots, which I will show below. All are easily constructed using Seaborn.
There are many formats for box plots, and Seaborn employs what, in my experience, seems to be the most common. (Matplotlib box plots are by default very ugly, which is why we will use Seaborn directly to make the plots.) Note that the DataFrame must be tidy for Seaborn to make the box plot.
First, we'll make a plot, and then I'll describe what it means.
# Use Seaborn to make box plot
ax = sns.boxplot(x='ID', y='impf', data=df, width=0.5)
# Relabel axes
ax.set_xlabel('Frog')
ax.set_ylabel('Impact force (mN)');
The middle line in a box gives the median of the measurements. The bottom and top of the box are the 25th and 75th percentile values, called the first and third quartile, respectively. The whiskers extend to the most extreme data points, or at most 1.5 times the interquartile range past the box edges (see below). Any data beyond the whiskers are plotted as individual points; these are the outliers.
How are outliers defined? Typically, for box plots, an outlier is defined as follows. The total length of the box is called the interquartile range (IQR), going from the first to the third quartile (Q1 and Q3, respectively). I.e., $\text{IQR} = \text{Q3} - \text{Q1}$. A datum is an outlier if its value $x$ satisfies
\begin{align}
&x < \text{Q1} - \frac{3}{2}\,\text{IQR},\\
\text{or } &x > \text{Q3} + \frac{3}{2}\,\text{IQR}.
\end{align}
The factor of $3/2$ is sometimes changed to 3 to denote extreme outliers, and it can be adjusted with the whis keyword argument to sns.boxplot(). Any points outside of the range of the whiskers are plotted explicitly and considered to be outliers.
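To make the rule concrete, here is a sketch (variable names are mine) that computes the outlier bounds for frog I's impact forces by hand.
# Compute Q1, Q3, and the IQR for frog I's impact forces
impf_I = df.loc[df['ID'] == 'I', 'impf']
q1, q3 = impf_I.quantile([0.25, 0.75])
iqr = q3 - q1
# Flag any measurement outside [Q1 - 1.5 IQR, Q3 + 1.5 IQR] as an outlier
impf_I[(impf_I < q1 - 1.5 * iqr) | (impf_I > q3 + 1.5 * iqr)]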
We only have 20 measurements for each frog. Wouldn't it be better just to plot all 20 instead of trying to distill them down to a mean and a standard deviation in the bar graph or quartiles in a box plot? After all, the impact force measurements might not be Gaussian distributed; they may be bimodal or something else. So, we would like to generate a column scatter plot. We will plot each data point for each frog in a column, and "jitter" its position along the $x$-axis. We make the points somewhat transparent to allow visualization of overlap. This is accomplished using Seaborn's stripplot() function.
# Make the jitter plot
ax = sns.stripplot(x='ID', y='impf', data=df, jitter=True, alpha=0.6)
# Relabel axes
ax.set_xlabel('Frog')
ax.set_ylabel('Impact force (mN)');
Very nice! Now we can see that frog I, an adult, strikes with a wide range of impact forces, and can strike really hard. Frog II, also an adult, tends to strike at around 500 mN, but occasionally will strike harder. Juvenile frog III is a pretty consistent striker, while frog IV can have some pretty weak strikes.
The column scatter plot is not difficult to look at. The informational content of the data does not need to be distilled into a box and whisker, and certainly not into a bar with a standard deviation. For large data sets, in my experience when the number of data points is above about 200, it is useful to also include the box plot with the jitter plot. This is also easily done with Seaborn.
# Use Seaborn to make box plot
ax = sns.boxplot(x='ID', y='impf', data=df, width=0.5)
# Make the jitter plot
ax = sns.stripplot(x='ID', y='impf', data=df, jitter=True, marker='o',
alpha=0.8, color=(0.7, 0.7, 0.7))
# Relabel axes
ax.set_xlabel('Frog')
ax.set_ylabel('Impact force (mN)');
Plots like jitter plots in which the dots do not overlap are called beeswarm plots. For large numbers of data points, these are not favored over the jitter plot, but for small numbers of data points (fewer than 200 is a good rule of thumb), beeswarm plots tend to be easier to read, as each data point can be clearly seen. We can make a beeswarm plot using Seaborn's swarmplot() function.
# Make the jitter plot
ax = sns.swarmplot(x='ID', y='impf', data=df)
# Relabel axes
ax.set_xlabel('Frog')
ax.set_ylabel('Impact force (mN)');
We can get even more information. We might be interested to see whether the variability in impact force is day-to-day or time-independent. So, we would like to make the beeswarm plot with different colors for different days. We can use the hue kwarg of the sns.swarmplot() function to color the dots by the day they were measured.
# Make the jitter plot
ax = sns.swarmplot(x='ID', y='impf', data=df, hue='date')
# Remove legend because it gets in the way
ax.legend_.remove()
# Relabel axes
ax.set_xlabel('Frog')
ax.set_ylabel('Impact force (mN)');
We do see that frogs I and II were measured on different days, and III and IV were measured on the same days. There also does not seem to be any correlation between impact force and the day the frog was measured.
Finally, let's look at how the adhesive force relates to the impact force by plotting one against the other.
# Rename the column for convenience in future use
df = df.rename(columns={'adhesive force (mN)' : 'adhf'})
# Plot adhesive force vs. impact force
plt.plot(df.impf, df.adhf, marker='.', linestyle='none', alpha=0.6)
plt.xlabel('impact force (mN)')
plt.ylabel('adhesive force (mN)');
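As a quick quantitative peek (an aside of mine), we can compute the Pearson correlation coefficient between the two forces with NumPy.
# Pearson correlation coefficient between impact and adhesive force
np.corrcoef(df.impf, df.adhf)[0, 1]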
Later in the course, we will learn how to do regressions to test the relationship between two variables.
In this tutorial, we have learned how to load data from CSV files into Pandas DataFrames. DataFrames are useful objects for looking at data from many angles. Together with matplotlib, you can start to look at qualitative features of your data and think about how you will go about doing your detailed analysis.