This tutorial was generated from an IPython notebook. You can download the notebook here.
In this tutorial, we will further develop our skills in measuring motion by learning about particle image velocimetry (PIV). We will not go through an entire calculation, but will instead demonstrate the principle.
Before we begin, we import our favorite modules, as always.
# As usual, import modules
from __future__ import division, absolute_import, \
    print_function, unicode_literals
import numpy as np
import scipy.optimize
import matplotlib.pyplot as plt
from matplotlib import cm
# Utilities from JB
import jb_utils as jb
# Necessary to display plots in this IPython notebook
%matplotlib inline
The data sets consist of time lapse images of autofluorescent yolk granules in Drosophila oocytes at various stages of development. The cytoplasm of the oocytes undergoes cytoplasmic streaming, which results in movement of the yolk granules. By imaging the yolk granules, we can visualize the flow of the cytoplasm.
The details:
2013-07-01_stage9.ome.tiff: Stage 9 oocyte, 0.31 µm/pixel, 10 sec/frame.
2012-03-08_stage11.ome.tiff: Stage 11 oocyte, 0.43 µm/pixel, 5 sec/frame.
2012-08-30_stage12.ome.tiff: Stage 12 oocyte, 0.43 µm/pixel, 5 sec/frame.
Let's load in the data sets and take a look.
# Stage 9
fname = '../data/dros_streaming/2013-07-01_stage9.ome.tiff'
physical_size = 0.31
dt = 10.0
s9_xyt = jb.XYTStack(fname=fname, conserve_memory=False, dt=dt,
                     physical_size_x=physical_size,
                     physical_size_y=physical_size)
# Stage 11
fname = '../data/dros_streaming/2012-03-08_stage11.ome.tiff'
physical_size = 0.43
dt = 5.0
s11_xyt = jb.XYTStack(fname=fname, conserve_memory=False, dt=dt,
                      physical_size_x=physical_size,
                      physical_size_y=physical_size)
# Stage 12
fname = '../data/dros_streaming/2012-08-30_stage12.ome.tiff'
physical_size = 0.43
dt = 5.0
s12_xyt = jb.XYTStack(fname=fname, conserve_memory=False, dt=dt,
                      physical_size_x=physical_size,
                      physical_size_y=physical_size)
Before we quantify the motion, we'll discuss ways of visualizing it. Of course, we can look at the respective movies using the show_movie method of an XYTStack. To visualize flow in a static image, we can use a maximum intensity projection. This means that a single image is generated from the image stack, where each pixel value in that image is given by the maximum value encountered at that position across all images in the stack. Let's write a function to do that.
def max_intensity_proj(xyt, start_frame=0, end_frame=None):
    """
    Does a maximum intensity projection of an XYTStack.
    """
    # Determine end frame if necessary
    if end_frame is None:
        end_frame = xyt.size_t - 1

    # Keep a running pixel-wise maximum over the frames
    max_proj = np.zeros_like(xyt.im(start_frame))
    for i in range(start_frame, end_frame + 1):
        im = xyt.im(i)
        inds = np.greater(im, max_proj)
        max_proj[inds] = im[inds]

    return max_proj
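As an aside, if all of the frames fit comfortably in memory, the same projection can be computed by stacking the frames into a 3-D array and taking a pixel-wise maximum. This is just a sketch of an equivalent approach, assuming, as above, that xyt.im(i) returns 2-D NumPy arrays; the function name is ours.
def max_intensity_proj_stacked(xyt, start_frame=0, end_frame=None):
    """Equivalent maximum intensity projection via a single np.max call."""
    # Determine end frame if necessary
    if end_frame is None:
        end_frame = xyt.size_t - 1

    # Stack frames along a third axis and take the pixel-wise maximum
    frames = [xyt.im(i) for i in range(start_frame, end_frame + 1)]
    return np.dstack(frames).max(axis=2)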
With our function in hand, we can compute the maximum intensity projections and look at them. To make a fair comparison, we should do our projections over the same total time interval; since the stage 9 movie was acquired at 10 sec/frame while the others were acquired at 5 sec/frame, we project only the first 30 frames of the stage 9 stack.
# Compute maximum projections
s9_max_proj = max_intensity_proj(s9_xyt, end_frame=29)
s11_max_proj = max_intensity_proj(s11_xyt)
s12_max_proj = max_intensity_proj(s12_xyt)
# Show max intensity projections
fig, ax = plt.subplots(2, 2, figsize=(8, 8.5))
ax[0,0].imshow(s9_max_proj, cmap=cm.gray)
ax[0,0].set_title('Stage 9')
ax[0,1].imshow(s11_max_proj, cmap=cm.gray)
ax[0,1].set_title('Stage 11')
ax[1,0].imshow(s12_max_proj, cmap=cm.gray)
ax[1,0].set_title('Stage 12')
# Dummy image in fourth axis
ax[1,1].imshow(np.zeros_like(s12_max_proj), cmap=cm.gray)
We haven't put in scale bars, but we know that the interpixel spacing of the stage 9 image (0.31 µm/pixel) is smaller than that of the stage 11 and stage 12 images (0.43 µm/pixel for both). The longer the streaks in the maximum intensity projection, the farther the granules moved during the projection interval. We can therefore immediately see that streaming is fastest in stage 12, a bit slower in stage 11, and much slower in stage 9. But we would like to quantify the velocities. This is where PIV becomes very useful!
The main idea behind PIV is to take a small region (an interrogation window) of one image and compute its cross-correlation with the corresponding region of a later image. We then find the point of maximal correlation, which in turn tells us the direction and magnitude of the motion.
Of course, we first need to understand what cross-correlation is. We define the cross-correlation $R(x,y)$ between images $I_1$ and $I_2$ as
\begin{align} R(x,y) = \sum_i \sum_j I_1(i,j) I_2(i+x, j+y). \end{align}
We could compute this sum directly, but it is much faster to use the correlation theorem, which says that
\begin{align} \hat{R}(x,y) = \hat{I}_1^* \cdot \hat{I}_2, \end{align}
where the hat denotes the Fourier transform and the $*$ denotes the complex conjugate. Thus, we can compute the cross-correlation just by computing Fourier transforms.
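Before applying this to real images, it is worth a quick sanity check. The sketch below is not part of the original analysis (the 8×8 image size is an arbitrary choice); it verifies numerically that the direct double sum and the Fourier computation agree. One subtlety: the FFT computes a circular cross-correlation, so the direct sum wraps indices periodically (via np.roll) to match.
# Two small random "images"
a = np.random.rand(8, 8)
b = np.random.rand(8, 8)

# Direct computation: R(x, y) = sum_ij a(i, j) * b(i + x, j + y),
# with periodic (circular) indexing
R_direct = np.empty_like(a)
for x in range(8):
    for y in range(8):
        R_direct[x, y] = (a * np.roll(np.roll(b, -x, axis=0),
                                      -y, axis=1)).sum()

# Fourier computation via the correlation theorem
R_fourier = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real

# The two agree to machine precision; prints True
print(np.allclose(R_direct, R_fourier))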
We'll start simply and look at a single region in one frame and compare it to the same region in a later frame.
# Get images
im_1 = s11_xyt.im(0)[350:414, 275:339]
im_2 = s11_xyt.im(2)[350:414, 275:339]
# Look at them
fig, ax = plt.subplots(1, 2, figsize=(8, 5))
ax[0].imshow(im_1, cmap=cm.gray)
ax[1].imshow(im_2, cmap=cm.gray)
This might be easier to see if we plot the two images with false coloring.
# Scale images from 0 to 1 for visualization
im_1_show = (im_1 - im_1.min()) / (im_1.max() - im_1.min())
im_2_show = (im_2 - im_2.min()) / (im_2.max() - im_2.min())
# Overlay the images: im_1 in the red channel, im_2 in the blue channel
color_im = np.dstack((im_1_show, np.zeros_like(im_1_show), im_2_show))
# Look at them
plt.imshow(color_im)
We now clearly see that there is movement to the lower right. We can look at the correlation between the two.
# Compute cross-correlation using the correlation theorem
ccoeff = np.fft.fftshift(np.fft.ifft2(np.conj(np.fft.fft2(im_1))
                                      * np.fft.fft2(im_2)).real)

# Normalize and find maximum
if not (ccoeff == 0.0).all():
    ccoeff /= ccoeff.max()  # Normalize

max_ind = np.nonzero(ccoeff == ccoeff.max())
max_loc = (max_ind[0][0], max_ind[1][0])
# Show the correlation, with the image center in blue and the peak in yellow
plt.imshow(ccoeff, cmap=cm.RdBu_r)
plt.plot(im_1.shape[1] / 2, im_1.shape[0] / 2, '.', color=(0.0, 0.0, 1.0))
plt.plot(max_loc[1], max_loc[0], '.', color=(1.0, 1.0, 0.0))
plt.margins(0, 0)
Of course, we want subpixel accuracy, so we can fit the cross-correlation function with a Gaussian in exactly the same way as when we were tracking beads. First, we'll redefine the Gaussian fitter from before.
# Fit symmetric Gaussian to x, y, z data
def fit_gaussian(x, y, z):
    """
    Fits symmetric Gaussian to x, y, z.

    Fit func: z = a * exp(-((x - x_0)**2 + (y - y_0)**2) / (2 * sigma**2))

    Returns: p = [a, x_0, y_0, sigma]
    """
    def sym_gaussian(p):
        """
        Returns a symmetric Gaussian evaluated at x, y:
            a**2 * exp(-((x - x_0)**2 + (y - y_0)**2) / (2 * sigma**2))
        p = [a, x_0, y_0, sigma]
        The amplitude is parametrized as a**2 to keep it nonnegative.
        """
        a, x_0, y_0, sigma = p
        return a**2 \
            * np.exp(-((x - x_0)**2 + (y - y_0)**2) / (2.0 * sigma**2))

    def sym_gaussian_resids(p):
        """Residuals to be sent into leastsq."""
        return z - sym_gaussian(p)

    def guess_fit_gaussian():
        """
        Guess a, x_0, y_0, and sigma by computing moments of the data.
        """
        # Square root of the peak height, since the amplitude is a**2
        a = np.sqrt(z.max())

        # Compute first moments to estimate the center
        total = z.sum()
        x_0 = np.dot(x, z) / total
        y_0 = np.dot(y, z) / total

        # Crude estimate of sigma from second moments
        sigma_x = np.dot(x**2, z) / total
        sigma_y = np.dot(y**2, z) / total
        sigma = np.sqrt(sigma_x * sigma_y)

        # Return guess
        return (a, x_0, y_0, sigma)

    # Get guess
    p0 = guess_fit_gaussian()

    # Perform optimization using nonlinear least squares
    popt, junk_output, info_dict, mesg, ier = \
        scipy.optimize.leastsq(sym_gaussian_resids, p0, full_output=True)

    # Check to make sure leastsq was successful. If not, return the
    # moment-based estimate. Either way, square the amplitude parameter.
    if ier in (1, 2, 3, 4):
        return (popt[0]**2, popt[1], popt[2], popt[3])
    else:
        return (p0[0]**2, p0[1], p0[2], p0[3])
Now, we get subpixel resolution on the center of the cross-correlation. We'll fit the Gaussian in a 5$\times$5 neighborhood around the integer-pixel maximum.
# Define neighborhood half-width
neigh_width = 2

# Pull out neighborhood and reshape to send to fit_gaussian
corr = ccoeff[max_loc[0]-neigh_width:max_loc[0]+neigh_width+1,
              max_loc[1]-neigh_width:max_loc[1]+neigh_width+1]
y, x = np.indices(corr.shape)
# Perform the curve fit
popt = fit_gaussian(x.ravel(), y.ravel(), corr.ravel())
# Pull out parameters
a, x_sub, y_sub, sigma = popt
# Compute total displacement in units of pixels
dx_pix = max_loc[1] + x_sub - neigh_width - im_1.shape[1] / 2
dy_pix = max_loc[0] + y_sub - neigh_width - im_1.shape[0] / 2
# Report results
print("""
dx_pix = {0:.5f}
dy_pix = {1:.5f}
""".format(dx_pix, dy_pix))
So, we see movement to the lower right, as we would expect. We can now compute an actual velocity. We compared frames that are two apart, which at 5 sec/frame corresponds to 10 seconds, and the interpixel spacing is 0.43 µm/pixel.
# Compute components of velocity vector
v_x = dx_pix * s11_xyt.physical_size_x / (2 * s11_xyt.dt)
v_y = dy_pix * s11_xyt.physical_size_x / (2 * s11_xyt.dt)
# Print results
print("""
v_x = {0:d} nm/s
v_y = {1:d} nm/s
speed = {2:d} nm/s
""".format(int(np.round(v_x * 1000)), int(np.round(v_y * 1000)),
int(np.round(np.sqrt(v_x**2 + v_y**2) * 1000))))
When doing PIV in earnest, we repeat this calculation over interrogation windows tiled throughout the image, and over successive frame pairs, to get the velocity field over time, as sketched below. There are many subtleties to consider, such as the choice of window size and overlap, which are beyond the scope of this tutorial. Nonetheless, we have shown how powerful cross-correlation can be for quantifying motion in images.
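To close, here is a minimal sketch of how the pieces above might be assembled into a single pass over one pair of frames. Everything about it is an illustrative assumption rather than part of the original analysis: the name piv_field, the 64-pixel non-overlapping windows, and the choice to simply skip windows whose correlation peak sits too close to the window edge for the Gaussian fit.
def piv_field(xyt, frame_1, frame_2, window=64, neigh_width=2):
    """
    Crude PIV pass: tile the image into non-overlapping interrogation
    windows and compute a subpixel displacement (in pixels) for each.
    Returns window-center coordinates and (dx, dy) displacements.
    """
    im_a, im_b = xyt.im(frame_1), xyt.im(frame_2)
    centers, disps = [], []
    for r in range(0, im_a.shape[0] - window + 1, window):
        for c in range(0, im_a.shape[1] - window + 1, window):
            win_a = im_a[r:r+window, c:c+window]
            win_b = im_b[r:r+window, c:c+window]

            # Cross-correlation via the correlation theorem, as above
            ccoeff = np.fft.fftshift(np.fft.ifft2(
                np.conj(np.fft.fft2(win_a)) * np.fft.fft2(win_b)).real)

            # Integer-pixel peak location
            max_loc = np.unravel_index(ccoeff.argmax(), ccoeff.shape)

            # Skip windows whose peak is too close to the edge to fit
            if not (neigh_width <= max_loc[0] < window - neigh_width
                    and neigh_width <= max_loc[1] < window - neigh_width):
                continue

            # Subpixel refinement with the Gaussian fitter from above
            corr = ccoeff[max_loc[0]-neigh_width:max_loc[0]+neigh_width+1,
                          max_loc[1]-neigh_width:max_loc[1]+neigh_width+1]
            y, x = np.indices(corr.shape)
            _, x_sub, y_sub, _ = fit_gaussian(x.ravel(), y.ravel(),
                                              corr.ravel())
            dx = max_loc[1] + x_sub - neigh_width - window / 2
            dy = max_loc[0] + y_sub - neigh_width - window / 2

            centers.append((c + window / 2, r + window / 2))
            disps.append((dx, dy))

    return np.array(centers), np.array(disps)

# For example, to get velocities (in µm/s) between frames 0 and 2 of the
# stage 11 stack and show them as a quiver plot over the first frame:
centers, disps = piv_field(s11_xyt, 0, 2)
v = disps * s11_xyt.physical_size_x / (2 * s11_xyt.dt)
plt.imshow(s11_xyt.im(0), cmap=cm.gray)
plt.quiver(centers[:, 0], centers[:, 1], v[:, 0], v[:, 1], color='yellow')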