Setup a Postgres + Python Docker Dev Stack

May 07, 2019

Query a Postgres DB from a Python Application

By the end of this blog post you will have all the tools you need to query a Postgres database, with some dummy data, from a Python application, all without installing a single thing besides Docker on your computer!

Learning Curve

If you read my Setup a MySQL + Python Docker Dev Stack, you'll know all about how much trouble I had figuring out that pesky connection string. It was such a nitpicky little detail, but so crucial!

That single lesson, frustrating as it was, has held true for every database I have ever connected to. It doesn't matter what kind of database it is. It could be MySQL, Postgres, MongoDB, or a Redis cache. You just need to know the user, password, host, and default port!
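To make that concrete, here is a minimal sketch of how those pieces assemble into a SQLAlchemy connection URL. The values are just the placeholders we will set up in the docker-compose file later in this post:

# A SQLAlchemy connection URL is just those pieces strung together:
#   dialect+driver://user:password@host:port/database
from sqlalchemy import create_engine

user = "root"           # POSTGRES_USER from the compose file below
password = "password"   # POSTGRES_PASSWORD
host = "postgres_db"    # the docker-compose service name
port = 5432             # the default Postgres port
database = "bookstore"  # POSTGRES_DB

conn_str = f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}"
engine = create_engine(conn_str)
Python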

Onwards with an Example!

Docker-Compose networking MAGIC

When you spin up a docker-compose stack it kind of acts as if there are IT goblins living in your computer and creating hostnames and connections and whatnot. 

Let's say I am using the standard Postgres Docker image (as one should!) to serve up my database. The default port for Postgres is 5432, and the hostname is whatever name I give the service in my docker-compose file. Once you know that, figuring out the connection string is a breeze!

version: '3'
services:

  postgres_db:
    image: postgres:11.1
    environment:   # Set up postgres database name and password
      POSTGRES_PASSWORD: password
      POSTGRES_DB: bookstore
      POSTGRES_USER: root
    ports:    # Map container port 5432 to port 5433 on the host
      - 5433:5432
    networks:
      - app-tier
    volumes:
      - ./postgres:/docker-entrypoint-initdb.d   # init scripts here run on first startup


networks:
  app-tier:
    driver: bridge
YAML
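A quick note on that ports entry: other containers on the app-tier network reach the database at postgres_db on 5432, while the 5433:5432 mapping is what lets you connect from your host machine on port 5433. If you happen to have sqlalchemy and psycopg2 installed locally (completely optional, since the whole point here is to not install anything), a host-side connection would look something like this:

# Connecting from the HOST machine uses localhost and the mapped port 5433,
# not the service name. This assumes sqlalchemy and psycopg2 are installed
# locally, which is optional for this tutorial.
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://root:password@localhost:5433/bookstore")
with engine.connect() as connection:
    print(connection.execute("SELECT version();").fetchone())
Python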

Now, I could spin this up as is by putting the above docker-compose.yml file in a directory and running:

docker-compose up -d
docker-compose exec postgres_db bash
Bash

Notice that the service name, on line 4 of our docker-compose config, is postgres_db, and that this matches the name we give to the docker-compose exec command.


Postgres Docker Image Environmental Variables

You'll notice we set some environmental variables starting with POSTGRES. These are configuration variables read by the init process in the postgres docker container; they are used to spin up your database and set the initial username and password. If your stack is anything but a dev environment you will want to pass your passwords in through secrets, or at least through a .env file.
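Along the same lines, rather than hardcoding credentials in your application code, you could read them from environment variables that you set yourself on the python_app service (or in an .env file). A minimal sketch, with illustrative variable names:

# Read database credentials from the app container's environment instead of
# hardcoding them. These variable names (DB_USER, etc.) are illustrative; you
# would set them yourself in the compose file or an .env file.
import os

db_user = os.environ.get("DB_USER", "root")
db_password = os.environ.get("DB_PASSWORD", "password")
db_host = os.environ.get("DB_HOST", "postgres_db")
db_name = os.environ.get("DB_NAME", "bookstore")

conn_str = f"postgresql+psycopg2://{db_user}:{db_password}@{db_host}:5432/{db_name}"
Python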

Connect to the Database From A Python App

Here is where the fun happens. We can build a docker image for our python app by defining the packages we need in an environment.yml and installing them during the build. For more information on this process you can see my full tutorial at Develop a Python Flask App With Docker.

In this particular example we need sqlalchemy and a postgres connection library (the connection string later on uses psycopg2). I like to throw ipython in there too, because I get kind of twitchy when I don't have it. Ipython has pretty much replaced bash for me at this point. If you prefer jupyterhub notebooks you may want to add that too.

I have a pretty standardized Dockerfile format I use for python apps. I create my stack in a conda env yaml definition, copy it into my docker image, and install it with conda. I use the base miniconda image, but installing miniconda is a very straightforward process if you are working from a different base image.

FROM continuumio/miniconda3:4.5.11

RUN apt-get update -y; apt-get upgrade -y; apt-get install -y vim-tiny vim-athena ssh

COPY environment.yml environment.yml

RUN conda env create -f environment.yml
RUN echo "alias l='ls -lah'" >> ~/.bashrc
RUN echo "source activate connect" >> ~/.bashrc

# Setting these environmental variables is the functional equivalent of running 'source activate my-conda-env'
ENV CONDA_EXE /opt/conda/bin/conda
ENV CONDA_PREFIX /opt/conda/envs/connect
ENV CONDA_PYTHON_EXE /opt/conda/bin/python
ENV CONDA_PROMPT_MODIFIER (connect)
ENV CONDA_DEFAULT_ENV connect
ENV PATH /opt/conda/envs/connect/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Bash

You may wonder why I added the source activate connect to my ~/.bashrc AND set the corresponding conda environmental variables. The answer is convenience. I can use my conda env in the docker build itself, say to install pip packages that didn't install correctly as conda packages. I've found that dealing with the docker shell doesn't always work as I expect, and setting the conda env this way ensures it works the way I think it should. 

Add your Python App to your Stack

You can quite easily add a new service to your docker-compose stack. The whole thing looks like this:

version: '3'

# Run as
# docker-compose build; docker-compose up -d
# Check with
# docker ps
# Then check the logs with
# docker logs --tail 50 $container_id
# docker-compose images
# docker-compose logs --tail 20 repo_name

services:

  postgres_db:
    image: postgres:11.1
    environment:   # Set up postgres database name and password
      POSTGRES_PASSWORD: password
      POSTGRES_DB: bookstore
      POSTGRES_USER: root
    ports:    # Map container port 5432 to port 5433 on the host
      - 5433:5432
    networks:
      - app-tier
    volumes:
      - ./postgres:/docker-entrypoint-initdb.d   # init scripts here run on first startup

  python_app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres_db
    networks:
      - app-tier
    command:    # Keep the container running so we can exec into it
      tail -f /dev/null


networks:
  app-tier:
    driver: bridge
YAML

Specifying the command as tail -f /dev/null in the compose file is a cheap trick so I can keep all my configuration in the same compose file and exec commands on the python_app container. With docker-compose you can only execute commands on running containers. Without this command the python_app container would start and then immediately exit.
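Since the container stays up, you can exec into it and run a quick sanity check to confirm the conda environment built correctly. This assumes psycopg2 is the postgres driver you listed in your environment.yml, which is what the connection string below uses:

# Run inside the python_app container (e.g. from ipython) to confirm the
# packages from environment.yml are importable.
import sqlalchemy
import psycopg2

print("sqlalchemy:", sqlalchemy.__version__)
print("psycopg2:", psycopg2.__version__)
Python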

Connect to your Postgres DB from your Python App

Let's open up a shell in our python_app service, and test out the connection from there.

➜ docker-compose exec python_app bash
➜ ipython

# From within ipython
from sqlalchemy import create_engine

# user:password@host:port/database -- the host is the docker-compose service name
conn_str = "postgresql+psycopg2://root:password@postgres_db:5432/bookstore"

engine = create_engine(conn_str)
connection = engine.connect()
res = connection.execute("SELECT * FROM pg_catalog.pg_tables;")
Python
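To actually see the rows that came back, you can iterate over the result. A quick continuation of the same ipython session:

# Iterate over the result to print each row of pg_catalog.pg_tables
for row in res:
    print(row)

connection.close()
Python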

As you can see, we connected to the postgres_db service in our docker-compose stack using the credentials we specified earlier, and were able to execute a query against it. Once you have these basics down the sky is the limit!

Wrap Up

That's it! I hope I demonstrated just how easy it is to create highly configurable software stacks using docker-compose.

I always recommend playing around with the code yourself, so please grab the source code and get teching!
