Configuring a PostgreSQL DB with your Dockerized FastAPI App


Welcome to Part 2 of Up and Running with FastAPI. If you missed part 1, you can find it here.

This series is focused on building a full-stack application with the FastAPI framework. The app allows users to post requests to have their residence cleaned, and other users can select a cleaning project for a given hourly rate.


Last time we left off, we dockerized our FastAPI backend and set up a router with a single dummy endpoint. The goal of this post will be to implement a data persistence layer. Our application will be powered by the de facto open-source relational database - PostgreSQL.

We’ll spend the entirety of this post setting up our database, configuring our db environment, and getting migrations in place. In the next post, we’ll flesh out our router with the appropriate endpoints, and test our endpoints using Pytest and Docker.

There is a LOT of setup that needs to happen before we do any serious development, so be warned. This part can be painful.

Setting up PostgreSQL with Docker

Anyone who’s set up postgres on their machine knows that it can be a pain in the ass at times. What makes Docker so great is how seamlessly we can integrate any database into our currently running application.

We’ll begin by configuring docker to create a container housing our postgres instance. Update your docker-compose.yml file to look like this:

docker-compose.yml
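
Since the exact contents depend on how the project was set up in part 1, treat this as a sketch - the service names, paths, and the uvicorn module path are assumptions you should adapt to your own layout:

```yaml
version: "3.8"

services:
  server:
    build:
      context: ./backend
      dockerfile: Dockerfile
    volumes:
      - ./backend/:/backend/
    env_file:
      - ./backend/.env
    command: uvicorn app.api.server:app --reload --workers 1 --host 0.0.0.0 --port 8000
    ports:
      - 8000:8000
    depends_on:
      - db

  db:
    image: postgres:13-alpine
    volumes:
      # named volume so postgres data survives container restarts
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./backend/.env
    ports:
      - 5432:5432

volumes:
  postgres_data:
```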

There are a few things going on here. We've added a new service called db, and we've pulled down the standard postgres:13-alpine image that comes prepackaged with everything we need for our postgres database.

We’re also adding a postgres_data volume to our container. Volumes exist so that our containers can maintain state between runs. We wouldn’t want to drop our database each time we kill our running container.

We'll need a few additional packages to interact with our database, set up tables, and run migrations. We'll need to add each of those to our requirements.txt file like so:

requirements.txt
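
Exact packages and version pins will vary; here's a sketch of the database-related additions on top of whatever you already had from part 1:

```text
# existing dependencies from part 1 (versions omitted)
fastapi
uvicorn

# database-related additions
databases[postgresql]   # async database access from FastAPI
SQLAlchemy              # table definitions and core SQL expressions
alembic                 # migrations
psycopg2-binary         # sync driver used by alembic / SQLAlchemy
```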

We've now added new dependencies, so we'll need to rebuild our containers before we can make full use of these libraries.

All that’s left now is to build our new containers. Execute the following command:
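
If you're following the compose file sketched above, this will rebuild the images and start everything up:

```bash
docker-compose up --build
```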

Once the build process is finished, you should see your container running with a nice warning about no password being set for the database. Take note of that - we’re going to come back and fix it later on.

In another tab, run docker ps to see a list of running Docker processes. Do you see two containers - one for your server and one for your database? If so, congrats. You’ve got your very own PostgreSQL database.

FastAPI Config

It’s best practice in any application to store sensitive information as environment variables, and make sure they’re not checked into git. FastAPI and the underlying framework it’s built on - Starlette - make this relatively straightforward. First, head into the .gitignore file we made earlier and add the following:

.gitignore
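
The important addition is the .env file itself (plus the usual Python clutter, if it isn't in there already):

```text
# environment files hold secrets and should never be committed
.env

# the usual python clutter
__pycache__/
*.pyc
```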

Then go ahead and initialize git, and make the first commit.
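
Something along these lines will do:

```bash
git init
git add .
git commit -m "initial commit"
```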

With that out of the way, we can start adding config data to our .env file.
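
Something like the following works for local development - the SECRET_KEY value is a placeholder, and the postgres values mirror the image defaults, with the server pointing at our db service:

```text
SECRET_KEY=supersecret
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_SERVER=db
POSTGRES_PORT=5432
POSTGRES_DB=postgres
```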

Each time we clone this repo, this file will have to be recreated, so it’s often a good idea to make a .env.template file with the names (NOT VALUES) of the environment variables the repo requires.
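
The template simply lists the variable names with the values stripped out:

```text
SECRET_KEY=
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_SERVER=
POSTGRES_PORT=
POSTGRES_DB=
```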

We're using default values for the postgres user, password, and db - but don't worry. We'll update those later as well.

Now create a config.py file in the core directory.

Inside that file, add the following:

core/config.py
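
Here's a sketch of the config module. The project metadata constants (PROJECT_NAME, VERSION, API_PREFIX) and their values are placeholders - name them whatever suits your app:

```python
from databases import DatabaseURL
from starlette.config import Config
from starlette.datastructures import Secret

config = Config(".env")

# app metadata we'll reuse when instantiating FastAPI (values are placeholders)
PROJECT_NAME = "cleanings-app"
VERSION = "1.0.0"
API_PREFIX = "/api"

SECRET_KEY = config("SECRET_KEY", cast=Secret, default="CHANGEME")

# postgres connection pieces, pulled from the environment
POSTGRES_USER = config("POSTGRES_USER", cast=str)
POSTGRES_PASSWORD = config("POSTGRES_PASSWORD", cast=Secret)
POSTGRES_SERVER = config("POSTGRES_SERVER", cast=str, default="db")
POSTGRES_PORT = config("POSTGRES_PORT", cast=str, default="5432")
POSTGRES_DB = config("POSTGRES_DB", cast=str)

# compose the pieces into a single connection string
DATABASE_URL = config(
    "DATABASE_URL",
    cast=DatabaseURL,
    default=f"postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{POSTGRES_SERVER}:{POSTGRES_PORT}/{POSTGRES_DB}",
)
```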

What we've done here is compose a number of environment variables into a single database connection string that we'll use to link our FastAPI app to our postgres db. Starlette provides a Config object that allows us to specify a file - .env - to look for environment variables in. We then create some constants that we'll use across our app, cast them to the appropriate data types, and give defaults where we might want them.

NOTE: Any config variable that does not have a default MUST be provided a value in the .env file or an error will be thrown.

As we develop this backend, we’ll revisit the config.py file regularly.

Connecting FastAPI to PostgreSQL

We can now use the connection string we've created to connect to our database using FastAPI's startup event.

Let’s start by creating a db directory to house all database-related code and a tasks.py file to go along with it.
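
From the project root, that might look like this - paths assume the backend/app layout from part 1:

```bash
mkdir backend/app/db
touch backend/app/db/__init__.py backend/app/db/tasks.py
```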

The tasks file we’ve created will establish our database connection and handle any additional configuration we require. We’ll also create a tasks file in the core directory, to wrap the startup and shutdown events for our app.

Open up the core/tasks.py file and add the following:

core/tasks.py
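
Here's a sketch, assuming the application package is importable as app:

```python
from typing import Callable

from fastapi import FastAPI

from app.db.tasks import connect_to_db, close_db_connection


def create_start_app_handler(app: FastAPI) -> Callable:
    # returns the coroutine FastAPI will run on the "startup" event
    async def start_app() -> None:
        await connect_to_db(app)

    return start_app


def create_stop_app_handler(app: FastAPI) -> Callable:
    # returns the coroutine FastAPI will run on the "shutdown" event
    async def stop_app() -> None:
        await close_db_connection(app)

    return stop_app
```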

We've defined two functions here - one for application startup and one for shutdown. Each returns an async function that FastAPI will run at the appropriate moment: the first is responsible for creating our database connection, and the second for closing it down. We can add other tasks here later, but this is all we need for now. We're importing the connect_to_db and close_db_connection functions from our db/tasks.py file, though we haven't defined those yet.

Let’s do that now.

db/tasks.py
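
Here's a sketch using the databases package - the pool sizes are arbitrary choices:

```python
import logging

from databases import Database
from fastapi import FastAPI

from app.core.config import DATABASE_URL

logger = logging.getLogger(__name__)


async def connect_to_db(app: FastAPI) -> None:
    # min_size/max_size control the connection pool - the values here are arbitrary
    database = Database(DATABASE_URL, min_size=2, max_size=10)

    try:
        await database.connect()
        app.state._db = database
    except Exception as e:
        logger.warning("--- DB CONNECTION ERROR ---")
        logger.warning(e)


async def close_db_connection(app: FastAPI) -> None:
    try:
        await app.state._db.disconnect()
    except Exception as e:
        logger.warning("--- DB DISCONNECT ERROR ---")
        logger.warning(e)
```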

In the connect_to_db function we're using the databases package to establish a connection to our postgres db with the database url string we configured in our core/config.py file. We also add keyword arguments for the minimum and maximum number of connections to have open at a given time. Don't worry too much about that for now.

Then, we wait until the connection successfully completes, and we attach it as a _db key to the state object on our FastAPI app. When the app shuts down, we disconnect from the database to clean things up nicely.

Now let’s register those event handlers on our actual application in the server.py file. We’ll also replace some of the original metadata with constants from our config file.

server.py
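
Here's a sketch - the router import path depends on how you structured things in part 1, so adjust it accordingly:

```python
from fastapi import FastAPI

from app.api.routes import router as api_router  # router from part 1 - adjust to your layout
from app.core import config, tasks


def get_application() -> FastAPI:
    # replace hard-coded metadata with constants from our config file
    app = FastAPI(title=config.PROJECT_NAME, version=config.VERSION)

    # register the startup/shutdown handlers we just wrote
    app.add_event_handler("startup", tasks.create_start_app_handler(app))
    app.add_event_handler("shutdown", tasks.create_stop_app_handler(app))

    app.include_router(api_router, prefix=config.API_PREFIX)

    return app


app = get_application()
```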

First, we'll stop our running containers by hitting CONTROL+C, and then spin everything up again with docker-compose up.

If all goes well, you should see a log with the following:
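
The exact lines will vary with your setup, but with uvicorn you're looking for something like this (the server_1 prefix comes from the compose service name):

```text
server_1  | INFO:     Waiting for application startup.
server_1  | INFO:     Application startup complete.
```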

And boom! The database is fired up and ready to go.

Configuring Database Tables and Migrations with SQL Alchemy and Alembic

Migrations serve as documentation on how a database has been created/changed over time. Though there are advantages to creating multiple migration files, we’re going to start with one large migration file and modify it incrementally as we build our application.

Setting up migrations is also a pain, so bear with me here.

Inside our db directory, we’re going to create two more directories - migrations and repositories. We’re also going to create some alembic-specific files for our migrations, and a base.py file in repositories.
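
From the project root, that might look like this - again assuming the backend/app layout:

```bash
mkdir backend/app/db/migrations backend/app/db/repositories
touch backend/app/db/migrations/script.py.mako backend/app/db/migrations/env.py
touch backend/app/db/repositories/base.py
```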

If you refer to the Alembic documentation, you'll notice that we've laid down the foundation for setting up a migration environment. The script.py.mako file is a Mako template that will instruct alembic on how to generate new migration scripts, and the env.py file is a Python script that is run whenever the alembic migration tool is invoked.

We’re going to need to specify a directory that our migrations should live in, so let’s do that now.
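
Alembic drops generated migration scripts into a versions directory inside the migrations folder, so presumably a simple mkdir does it (path again assumes the layout above):

```bash
mkdir backend/app/db/migrations/versions
```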

On top of that, we need to configure our alembic environment - usually done in an alembic.ini file located in our base directory: backend.

The contents of the alembic.ini file are mostly taken directly from the generic example in the documentation.

alembic.ini
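
Here's a sketch. The script_location path assumes alembic commands are run from the backend directory (where alembic.ini lives), with the migration environment at app/db/migrations; the rest is lifted from the generic example:

```ini
[alembic]
script_location = app/db/migrations

[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
```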

Don't worry too much about what you see here. Most of the file deals with how alembic should log information to the terminal. The main thing we've changed is the script_location, which tells alembic where our migration environment - the env.py script and the Mako template - lives.

The script.py.mako file should look like this:
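
Here's a sketch - it's essentially the stock template that ships with alembic:

```mako
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by alembic
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}


def upgrade() -> None:
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    ${downgrades if downgrades else "pass"}
```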

Our template file will be used to create migration scripts whenever we ask alembic to generate them. Don't worry if this looks foreign - it'll make more sense once we put it to use.

And finally, the env.py file.

db/migrations/env.py
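
Here's a sketch. The notable departures from alembic's default env.py are that we pull DATABASE_URL from our config module instead of hard-coding it in alembic.ini, and that we append the project root to sys.path so the app package is importable (the parents[3] bit assumes env.py sits at app/db/migrations/env.py):

```python
import logging
import pathlib
import sys
from logging.config import fileConfig

import alembic
from sqlalchemy import engine_from_config, pool

# make the app package importable when alembic runs this script
# (assumes env.py lives at app/db/migrations/env.py under the backend root)
sys.path.append(str(pathlib.Path(__file__).resolve().parents[3]))

from app.core.config import DATABASE_URL  # noqa

# alembic's Config object provides access to values in alembic.ini
config = alembic.context.config

# configure python logging from the ini file
fileConfig(config.config_file_name)
logger = logging.getLogger("alembic.env")


def run_migrations_online() -> None:
    """Run migrations against a live database connection."""
    connectable = config.attributes.get("connection", None)
    config.set_main_option("sqlalchemy.url", str(DATABASE_URL))

    if connectable is None:
        connectable = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix="sqlalchemy.",
            poolclass=pool.NullPool,
        )

    with connectable.connect() as connection:
        alembic.context.configure(connection=connection, target_metadata=None)

        with alembic.context.begin_transaction():
            alembic.context.run_migrations()


def run_migrations_offline() -> None:
    """Emit migration SQL to stdout instead of executing it."""
    alembic.context.configure(url=str(DATABASE_URL))

    with alembic.context.begin_transaction():
        alembic.context.run_migrations()


if alembic.context.is_offline_mode():
    logger.info("Running migrations offline")
    run_migrations_offline()
else:
    logger.info("Running migrations online")
    run_migrations_online()
```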

Oh wow. That was a lot. If you already have some experience setting up migrations, much of this will look standard to you. If you’re new to this world, don’t try to understand it all at once. The best way to wrap your head around what’s going on here is to use it. And that’s exactly what we’re about to do.

Migrating Our Database

Let’s get some practice executing commands in a running docker container. We’ll do it a bunch, so get familiar with it.

First, open up a new terminal window or tab. Then get the IDs of all running containers like so:
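
```bash
docker ps
```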

You should see an output that starts with something that looks like the following:
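
Your image and container names will depend on your project, but the shape of the output is roughly this (some columns trimmed):

```text
CONTAINER ID   IMAGE                COMMAND                  ...   PORTS                    NAMES
a123bc007edf   backend_server       "uvicorn app.api.ser…"   ...   0.0.0.0:8000->8000/tcp   backend_server_1
b456de008aef   postgres:13-alpine   "docker-entrypoint.s…"   ...   5432/tcp                 backend_db_1
```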

Copy the ID of the container running our server. In this example that would be a123bc007edf. Now we can start executing bash commands as the container's root user by typing:
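
Substituting your own container ID:

```bash
docker exec -it a123bc007edf bash
```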

The -it part indicates we want to use the container interactively so we can run commands and inspect the container. By default, Docker containers run and exit as soon as they’ve finished executing.

Inside the bash shell, we can start exploring a little bit.
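
```bash
ls -l
```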

We should see all of the files and directories that Docker has copied into our container.

Feel free to explore the app directory and make sure everything is as it seems.

Now we can start using alembic commands right in the shell. We’ll start by generating our first migration script:
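
The migration message is up to you - something like:

```bash
alembic revision -m "create_main_tables"
```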

If all is well with our setup, we should see that a file has been created inside our db/migrations/versions directory. The two main functions - upgrade and downgrade - will run each time we migrate, and are where we'll create and drop database tables or execute raw SQL commands. The file will resemble the following:
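
Your revision ID and filename will differ, but the generated script is a bare skeleton along these lines:

```python
"""create_main_tables

Revision ID: 12345678654b
Revises:
Create Date: ...
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by alembic
revision = "12345678654b"
down_revision = None
branch_labels = None
depends_on = None


def upgrade() -> None:
    pass


def downgrade() -> None:
    pass
```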

The first thing we’ll do is define an additional function for creating our cleanings table. Then, we’ll run that function inside of the upgrade function and create an opposite action in the downgrade function.

We have 5 columns for each cleaning - id, name, description, cleaning_type, and price.

We use SQLAlchemy to create the different data types for each column. Ids are represented by integers. Name, description, and cleaning_type are represented by text. Price is represented by a numeric type with 2 decimal places.
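
Putting that together, here's a sketch of the filled-in migration - the nullable and index choices are reasonable defaults rather than anything prescribed:

```python
def create_cleanings_table() -> None:
    op.create_table(
        "cleanings",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.Text, nullable=False, index=True),
        sa.Column("description", sa.Text, nullable=True),
        sa.Column("cleaning_type", sa.Text, nullable=False),
        sa.Column("price", sa.Numeric(10, 2), nullable=False),
    )


def upgrade() -> None:
    create_cleanings_table()


def downgrade() -> None:
    op.drop_table("cleanings")
```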

Go ahead and run the migrations against the database.
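
Still inside the server container's bash shell:

```bash
alembic upgrade head
```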

We should see output indicating that alembic is working. If done correctly, there should be a line that looks like this:
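
Your revision hash will differ, but expect something along the lines of:

```text
INFO  [alembic.runtime.migration] Running upgrade  -> 12345678654b, create_main_tables
```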

Interacting with the Database

Ok, almost done.

To make sure that our database has been modified correctly, we’ll execute psql in the db container.
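
One way is through docker-compose, using the default credentials we put in our .env file:

```bash
docker-compose exec db psql -h localhost -U postgres --dbname=postgres
```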

We enter the container running our db service with the -h flag for host and -U flag for user.

Now we can enter commands directly into the terminal to interact with our database.

Here are a few psql commands that can be useful:
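
```text
\l            list databases
\dt           list tables in the current database
\d <table>    describe a table and its columns
\q            quit psql
```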

Start by running \d cleanings. If the migration worked, we expect the cleanings table to exist and the 5 columns with the correct datatypes to be displayed in the output.

We can also execute SQL queries directly against the database. Try the following:
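
Any simple query against the new table will do - for example:

```sql
SELECT id, name, price FROM cleanings;
```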

We should see the result of the query shown in the terminal consisting of the columns we selected and 0 rows matching the query. That’s because we haven’t added data yet. We’ll get to that next.

Wrapping Up and Resources

At this point, we have a working database and are ready to start populating it with real data. This post has gone on long enough, so we’ll wait until the next one to hammer out those details.

GitHub Repo

All code up to this point can be found here:
