Packages and modules


[1]:
# Colab setup ------------------
import os, sys, subprocess
if "google.colab" in sys.modules:
    cmd = "pip install --upgrade watermark"
    process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
    stdout, stderr = process.communicate()
# ------------------------------

Note: In this lesson, you will be learning how to write your own packages. While you will not necessarily need to do this in the course, it is an important skill to have. Because you will be writing .py files, this lesson is not directly usable on Google Colab, so you should do it on your own machine, if possible.

The Python Standard Library has lots of built-in modules that contain useful functions and data types for doing specific tasks. You can also use modules from outside the standard library. And you will undoubtedly write your own modules!
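As a quick illustration (one example among many), the standard library's statistics module already provides functions for common summary statistics, with no installation needed:

```python
import statistics

# The statistics module ships with Python and computes
# common summary statistics of sequences of numbers
print(statistics.mean([1, 2, 3, 4, 5]))    # 3
print(statistics.median([1, 2, 3, 4, 5]))  # 3
```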

A module is contained in a file that ends with .py. This file can have classes, functions, and other objects. We will not discuss defining your own classes in this class, so your modules will essentially just contain functions.

A package contains several related modules that are all grouped together under one name. We will extensively use the NumPy, SciPy, Pandas, and Bokeh packages, among others, and I’m sure you will also use them beyond. As such, the first module we will consider is NumPy.

Example: I want to compute the mean and median of a list of numbers

Say I have a list of numbers and I want to compute the mean. This happens all the time; you repeat a measurement multiple times and you want to compute the mean. We could write a function to do this.

[1]:
def mean(values):
    """Compute the mean of a sequence of numbers."""
    return sum(values) / len(values)

And it works as expected.

[2]:
print(mean([1, 2, 3, 4, 5]))
print(mean((4.5, 1.2, -1.6, 9.0)))
3.0
3.275

In addition to the mean, we might also want to compute the median, the standard deviation, etc. These seem like really common tasks. Remember my advice: if you want to do something that seems really common, a good programmer (or a team of them) probably already wrote something to do that. Means, medians, standard deviations, and lots and lots and lots of other numerical things are included in the Numpy package. To get access to it, we have to import it.

[3]:
import numpy

That’s it! We now have the numpy package available for use. Remember, in Python everything is an object, so if we want to access the methods and attributes available in the numpy module, we use dot syntax. In a Jupyter notebook or in the JupyterLab console, you can type

numpy.

(note the dot) and hit tab, and you will see what is available. For Numpy, there is a huge number of options!
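Outside of a notebook, you can get a similar listing programmatically with the built-in dir() function (a quick sketch; tab completion draws on the same information):

```python
import numpy

# dir() lists the attributes of a module; here we filter
# for names containing "median"
print([name for name in dir(numpy) if "median" in name])
```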

So, let’s try to use Numpy’s numpy.mean() function to compute a mean.

[4]:
print(numpy.mean([1, 2, 3, 4, 5]))
print(numpy.mean((4.5, 1.2, -1.6, 9.0)))
3.0
3.275

Great! We get the same values! Now, we can use the numpy.median() function to compute the median.

[5]:
print(numpy.median([1, 2, 3, 4, 5]))
print(numpy.median((4.5, 1.2, -1.6, 9.0)))
3.0
2.85
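With an even number of elements, Numpy reports the average of the two middle values of the sorted sequence. A minimal check:

```python
import numpy

# For an even number of elements, the median is interpolated as
# the average of the two middle values of the sorted sequence
print(numpy.median([1, 2, 3, 4]))  # average of 2 and 3, so 2.5
```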

This is nice. It gives the median, including when we have an even number of elements in the sequence of numbers, in which case it automatically interpolates. It is really important to know that it does this interpolation; if you are not expecting it, you can get surprising results. So, here is an important piece of advice:

Always check doc strings.

We can access the doc string of the numpy.median() function in JupyterLab by typing

numpy.median?

and looking at the output. An important part of that output:

Notes
-----
Given a vector ``V`` of length ``N``, the median of ``V`` is the
middle value of a sorted copy of ``V``, ``V_sorted`` - i.e.,
``V_sorted[(N-1)/2]``, when ``N`` is odd, and the average of the
two middle values of ``V_sorted`` when ``N`` is even.

This is where the documentation tells you that the median will be reported as the average of two middle values when the number of elements is even. Note that you could also read the documentation here, which is a bit easier to read.

The as keyword

We use Numpy all the time. Typing numpy over and over again can get annoying. So, it is common practice to use the as keyword to import a module with an alias. Numpy’s alias is traditionally np, and this is the only alias you should use for Numpy.

[6]:
import numpy as np

np.median((4.5, 1.2, -1.6, 9.0))
[6]:
2.85

I prefer to do things this way, though some purists differ. We will use traditional aliases for major packages like Numpy, Pandas, and HoloViews.

Third party packages

Standard Python installations come with the standard library. Numpy and other useful packages are not in the standard library. Outside of the standard library, there are several packages available. Several. Ha! There are currently (June 12, 2019) about 180,000 packages available through the Python Package Index, PyPI. Usually, you can ask Google about what you are trying to do, and there is often a third party module to help you do it. The most useful (for scientific computing) and thoroughly tested packages and modules are available using conda. Others can be installed using pip, which we will use soon.

Writing your own module

To write your own module, you need to create a .py file and save it. You can do this using the text editor in JupyterLab. Let’s call our module na_utils, for “nucleic acid utilities.” So, we create a file called na_utils.py. To start off, we’ll just have two functions in the module (and we’d naturally add more later): dna_to_rna(), which converts a DNA sequence to an RNA sequence (just changes T to U), and gc_content(), which computes the GC content of a sequence. The contents of na_utils.py should look as follows.

"""
Convert DNA sequences to RNA.
"""

def dna_to_rna(seq):
    """Convert a DNA sequence to RNA."""

    # Determine if original sequence was uppercase
    seq_upper = seq.isupper()

    # Convert to lowercase
    seq = seq.lower()

    # Swap out 't' for 'u'
    seq = seq.replace('t', 'u')

    # Return upper or lower case RNA sequence
    if seq_upper:
        return seq.upper()
    else:
        return seq


def gc_content(seq):
    """Compute GC content of a sequence."""

    seq = seq.lower()
    return (seq.count('g') + seq.count('c')) / len(seq)

Note that the file starts with a doc string saying what the module contains.

I then have my two functions, each with doc strings. We will now import the module and then use these functions. In order for the import to work, the file na_utils.py must be in your present working directory, since this is where the Python interpreter will look for your module. In general, if you execute the code

import my_module

the Python interpreter will look first in the working directory to find my_module.py. (The cell below will not work on your machine unless you have a file called na_utils.py with the above contents in your working directory.)

[7]:
import na_utils

# Sequence
seq = 'GACGATCTAGGCGACCGACTGGCATCG'

# Convert to RNA
na_utils.dna_to_rna(seq)
[7]:
'GACGAUCUAGGCGACCGACUGGCAUCG'

Wonderful! You now have your own functioning module!
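We can use the gc_content() function in the same way. (The snippet below reproduces the function body inline so it runs on its own; in practice you would call na_utils.gc_content(seq).)

```python
def gc_content(seq):
    """Compute GC content of a sequence."""
    seq = seq.lower()
    return (seq.count('g') + seq.count('c')) / len(seq)

# GC content of our example sequence (17 of 27 bases are G or C)
seq = 'GACGATCTAGGCGACCGACTGGCATCG'
print(gc_content(seq))
```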

A quick note on error checking

These functions have minimal error checking of the input. For example, the dna_to_rna() function will take gibberish in and give gibberish out.

[8]:
na_utils.dna_to_rna('You can observe a lot by just watching.')
[8]:
'you can observe a lou by jusu wauching.'

In general, checking input and handling errors is an essential part of writing functions, and we will cover that soon.
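As a preview, here is one possible sketch (a hypothetical function, not the version we will build later) that guards the input by raising an exception when the sequence contains characters that are not DNA bases:

```python
def dna_to_rna_checked(seq):
    """Convert a DNA sequence to RNA, raising an error on invalid input."""
    # Raise a RuntimeError if any character is not a valid DNA base
    for base in seq.lower():
        if base not in 'atgc':
            raise RuntimeError(f"Invalid base '{base}' in sequence.")

    # Swap out T for U, preserving case
    return seq.replace('T', 'U').replace('t', 'u')

print(dna_to_rna_checked('GACGATC'))  # 'GACGAUC'
```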

Importing modules in your .py files and notebooks

As our first foray into the glory of PEP 8, the Python style guide, we quote:

Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.

Imports should be grouped in the following order:

  1. standard library imports

  2. related third party imports

  3. local application/library specific imports

You should put a blank line between each group of imports.

You should follow this guide. I generally do it for Jupyter notebooks as well, with my first code cell having all of the imports I need. Therefore, going forward all of our lessons will have all necessary imports at the top of the document. The only exception is when we are explicitly demonstrating a concept that requires an import.

Once you have imported a module or package, the interpreter stores its contents in memory. You cannot update the contents of the package and expect the interpreter to know about the changes. You will need to restart the kernel and then import the package again in a fresh instance.

This can seem annoying, but it is good design. It ensures that code you are running does not change as you go through executing a notebook. However, when developing modules, it is sometimes convenient to have an imported module be updated as you run through the notebook as you are editing. To enable this, you can use the autoreload extension. To activate it, run the following in a code cell, being sure to include the % sign.

%load_ext autoreload
%autoreload 2

The % sign in IPython/JupyterLab means that you are going to use a magic function, which we will encounter from time to time.
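If you prefer not to use the extension, the standard library's importlib.reload() does a one-off re-import (shown here with the stdlib json module purely as a stand-in, since reloading your own na_utils would require the file to be present):

```python
import importlib
import json

# importlib.reload() re-executes a module's code and returns the
# (same) module object; handy after editing a module's .py file
json = importlib.reload(json)
print(json.dumps({"reloaded": True}))
```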

Package management

When we wrote the na_utils module, we stored it in the directory that we were working in, called the working directory. But what if you write a module that you want to use regardless of which directory you are in? To allow this kind of usage, you can use the setuptools module of the standard library to manage your packages. You should read the documentation on Python packages and modules to understand the details of how this is done, but what we present here is sufficient to get simple packages running and installed.

In fact, if you are going to write any piece of code you will reuse from notebook to notebook and/or from application to application, you should make a proper package for it, following the specifications I’ll now lay out.

In order for the tools in setuptools to effectively install your modules for widespread use, you need to follow a specific architecture for your package. Let’s say I wanted to make a package called example that has some of the utility dictionaries and functions we have encountered in the lessons so far.

The file structure of the package is

/example
  /example
    __init__.py
    na_utils.py
    bioinfo_dicts.py
    ...
  setup.py
  README.md

The ellipsis above signifies that there are other files in there that we could add. I am trying to keep it simple for now to show how package management works. We are focusing on the package structure here, so the contents of na_utils.py and bioinfo_dicts.py are shown below in the appendix.

It is essential that the name of the root directory be the name of the package, and that there be a subdirectory with the same name. That subdirectory must contain a file __init__.py. This file contains information about the package and how the modules of the package are imported, but it may be empty for simple modules. In this case, I included a string with the name and version of the package, as well as instructions to import appropriate modules. Here are the contents of __init__.py. The first two lines of code tell the interpreter what to import when running import example.

"""Top-level package for utilities for nucleic acids."""

from .na_utils import *
from .bioinfo_dicts import *

__author__ = 'Justin Bois'
__email__ = 'bois@caltech.edu'
__version__ = '0.0.1'

Also within the subdirectory are the .py files containing the code of the package. In our case, we have, na_utils.py and bioinfo_dicts.py.

It is also good practice to have a README file (which I suggest you write in Markdown) that has information about the package and what it does. Since this little demo package is kind of trivial, the README is quite short. Here are the contents I made for README.md (shown in unrendered raw Markdown).

# example

Utilities for parsing strings of nucleic acid sequences.

Finally, in the main directory, we need to have a file called setup.py, which contains the instructions for setuptools to install the package. We use the setuptools.setup() function to do the installation.

import setuptools

with open("README.md", "r") as f:
    long_description = f.read()

setuptools.setup(
    name='example',
    version='0.0.1',
    author='Justin Bois',
    author_email='bois@caltech.edu',
    description='Utilities for parsing strings of nucleic acid sequences.',
    long_description=long_description,
    long_description_content_type='text/markdown',
    packages=setuptools.find_packages(),
    classifiers=(
        "Programming Language :: Python :: 3",
        "Operating System :: OS Independent",
    ),
)

This is a minimal setup.py file, but it will be sufficient for most packages you write for your own use. For your own packages, you make the obvious changes to the name, author, and other fields.

Once your basic package architecture is built, you can install it using pip, which is a self-referential acronym for Pip Installs Packages. To install your package, make sure you are in the directory immediately above your package. If my package is in ~/git/example, I would want to cd ~/git. Then, do the following on the command line.

pip install -e example

The -e flag is important; it tells pip that this is a local, editable package. Your package is now accessible on your machine whenever you run the Python interpreter!

What you have just done is a common workflow with packages. You write your own packages (which are under version control, of course), and you make them available using pip install -e. In addition to your own packages, you used conda to install third party packages on your machine in lesson 0. Sometimes packages are not yet available via conda, but are nonetheless available in the Python Package Index (PyPI). There are over 260,000 packages in the PyPI. To install one of them, you simply use

pip install name_of_package

Note that the -e flag is missing. (More importantly, note that the -e flag is present when installing your own local package that is not (yet) in the PyPI.)

You can also update packages that are in the PyPI using the --upgrade flag.

pip install --upgrade name_of_package

Importantly, conda plays nicely with pip. If you install something with pip, conda will be aware of it.

Appendix

In the example package, the contents of bioinfo_dicts.py are:

"""
Useful bioinformatics dictionaries.
"""

aa = {
    "A": "Ala",
    "R": "Arg",
    "N": "Asn",
    "D": "Asp",
    "C": "Cys",
    "Q": "Gln",
    "E": "Glu",
    "G": "Gly",
    "H": "His",
    "I": "Ile",
    "L": "Leu",
    "K": "Lys",
    "M": "Met",
    "F": "Phe",
    "P": "Pro",
    "S": "Ser",
    "T": "Thr",
    "W": "Trp",
    "Y": "Tyr",
    "V": "Val",
}

# The set of DNA bases
bases = ["T", "C", "A", "G"]

# Build list of codons
codon_list = [
    first_base + second_base + third_base
    for first_base in bases
    for second_base in bases
    for third_base in bases
]

# The amino acids that are coded for (* = STOP codon)
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

# Build dictionary from tuple of 2-tuples (technically an iterator, but it works)
codons = dict(zip(codon_list, amino_acids))

del codon_list
del amino_acids
del bases
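As a quick sanity check of the codon dictionary built above, we can rebuild it (the snippet is self-contained, duplicating the construction from bioinfo_dicts.py) and look up a few codons:

```python
# The set of DNA bases, in the order used to build the codon table
bases = ["T", "C", "A", "G"]

# All 64 codons, in the same order as the amino acid string
codon_list = [
    first + second + third
    for first in bases
    for second in bases
    for third in bases
]

amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

# Zip codons with their amino acids to build the lookup dictionary
codons = dict(zip(codon_list, amino_acids))

print(len(codons))      # 64 codons in total
print(codons['ATG'])    # start codon; prints 'M' (methionine)
print(codons['TAA'])    # a STOP codon; prints '*'
```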

The contents of the file na_utils.py were shown earlier in this lesson.

Computing environment

[9]:
%load_ext watermark
%watermark -v -p numpy,jupyterlab
CPython 3.8.3
IPython 7.16.1

numpy 1.18.5
jupyterlab 2.1.5