Testing FastAPI Endpoints with Docker and Pytest



Welcome to Part 4 of Up and Running with FastAPI. If you missed part 3, you can find it here.

This series is focused on building a full-stack application with the FastAPI framework. The app allows users to post requests to have their residence cleaned, and other users can select a cleaning project for a given hourly rate.


In the previous post we created our CleaningsRepository to act as a database interface for our application, and wrote a single SQL query to insert data into our database. Then we hooked that up to a POST endpoint and created our first cleaning using the interactive docs that FastAPI provides for us. In this post, we’ll follow a TDD approach and create a standard GET endpoint. Instead of mocking database actions, we’ll spin up a testing environment and run all of our tests against a fresh PostgreSQL database to guarantee that our application is functioning as we expect it to.

About Testing

When it comes to testing, developers are stupidly opinionated.

I don’t actually know who’s right, but I do know that I like testing.

Yes, you read that correctly. I truly enjoy testing.

My first assumptions about any problem are almost always flawed in some way. Tests are my best friend because they help me hone my mental models and allow me to juggle fewer variables in my head at a given time. Offload some of that mental strain to automated testing. You’ll thank yourself later.

Especially when I’m just starting on a project, tests highlight my initial misconceptions quickly and give me a more familiar platform for introspection. Since this is a tutorial series on FastAPI, I hope to codify my approach to testing, which I’ve developed mostly from examining these three GitHub repos:

  1. FastAPI Template Generation repo
  2. FastAPI Users repo
  3. Real World FastAPI repo

Credit where credit is due.

Now on to the testing.

TDD and Pytest

Though it’s nice to test our code post-hoc to ensure that we’re seeing the behavior we expect, there’s definitely a better approach. Test-driven development (TDD) offers a clear guiding principle when developing applications: write the test first, then write enough code to make it pass. We’ll follow this process as we build out the rest of our endpoints.

As for actually writing and running tests, we’ll take advantage of pytest - a mature, full-featured Python testing tool that helps you write better programs. At least, that’s how they put it. Setting up pytest is straightforward.

First, we’ll update our requirements.txt file with our new testing dependencies. There are quite a few new additions here. Don’t worry, we’ll get to them all in time.

requirements.txt
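The dependency list itself isn’t reproduced here, but based on the tools used throughout the rest of this post, the testing-related additions look roughly like this (the exact package set and version pins are assumptions):

```text
# requirements.txt -- new testing additions (versions omitted; pin them as you see fit)
pytest
pytest-asyncio
httpx
asgi-lifespan
```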

With these new dependencies in place, we’ll need to rebuild our container.

While we wait for our build to complete, let’s take a look at what we’ve just installed. The key additions are pytest itself, pytest-asyncio for running asynchronous test functions, httpx for an asynchronous test client, and asgi-lifespan for exercising our ASGI app without standing up a live server.

Used in conjunction, these packages give us a robust, extensible testing system for FastAPI applications.

Note: readers unfamiliar with pytest are strongly encouraged to read the docs linked above on configuring environments and using fixtures.

Our tests will live in their own directory, under backend/tests - so we should create that directory if we haven’t already. We’ll also need to add three new files.

In our newly created conftest.py, we’ll add some code needed to configure our testing environment. There is a lot going on here, so we’ll write the code first, and then dissect it.

conftest.py
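The original conftest.py isn’t shown here, so below is a minimal sketch pieced together from the description that follows. The import path app.api.server.get_application and the app.state._db attribute are assumptions carried over from earlier parts of the series, and newer versions of pytest-asyncio and httpx may require @pytest_asyncio.fixture and ASGITransport respectively.

```python
# tests/conftest.py -- a minimal sketch; import paths are assumptions
import os
import warnings

import pytest
from alembic import command
from alembic.config import Config
from asgi_lifespan import LifespanManager
from databases import Database
from fastapi import FastAPI
from httpx import AsyncClient


# Apply migrations once for the whole testing session, against the test database
@pytest.fixture(scope="session")
def apply_migrations():
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    os.environ["TESTING"] = "1"  # tells our db config to target the testing database
    config = Config("alembic.ini")
    command.upgrade(config, "head")
    yield
    command.downgrade(config, "base")


# Instantiate a fresh FastAPI application for the tests
@pytest.fixture
def app(apply_migrations: None) -> FastAPI:
    from app.api.server import get_application  # assumed app factory from earlier parts

    return get_application()


# Grab a reference to the database connection attached to the app on startup
@pytest.fixture
def db(app: FastAPI) -> Database:
    return app.state._db


# Couple LifespanManager and AsyncClient so startup/shutdown events fire for each test
@pytest.fixture
async def client(app: FastAPI) -> AsyncClient:
    async with LifespanManager(app):
        async with AsyncClient(
            app=app,
            base_url="http://testserver",
            headers={"Content-Type": "application/json"},
        ) as client:
            yield client
```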

Alright, that’s a lot of stuff. Let’s break down what’s happening in pieces.

For readers unfamiliar with fixtures, sit tight. We’ll explain those in a bit.

We begin by defining our apply_migrations fixture that will handle migrating our database. We set the scope to session so that the db persists for the duration of the testing session. Though it’s not a requirement, this will speed up our tests significantly since we don’t apply and roll back our migrations for each test. Readers who feel that this isn’t their cup of tea can simply remove that scope parameter and use a fresh db for each test.

Not much else going on here. The fixture sets the TESTING environment variable to "1", so that we can migrate the testing database instead of our standard db. We’ll get to that part in a minute. Then it grabs our alembic migration config and runs all our migrations before yielding to allow all tests to be executed. At the end we roll back our migrations and call it a day.

Our app and db fixtures are standard. We instantiate a new FastAPI app and grab a reference to the database connection in case we need it. In our client fixture, we’re coupling LifespanManager and AsyncClient to provide a clean testing client that can send requests to our running FastAPI application. This pattern is adapted directly from the example on the asgi-lifespan GitHub repo.

That’s all that’s needed for setting up a testing environment in pytest. Now we’ll need to ensure our db config is prepared to handle a testing environment.

Updating Our Database Config

We’ll also need to update some of our database configuration. Open up the db/tasks.py file.

db/tasks.py
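The updated file isn’t shown either, so here’s a sketch of connect_to_db that follows the description below. The pool sizes, the logger, and how DB_SUFFIX ends up in the environment are assumptions.

```python
# db/tasks.py -- sketch of the updated connection logic
import logging
import os

from databases import Database
from fastapi import FastAPI

from app.core.config import DATABASE_URL  # configured in an earlier part of the series

logger = logging.getLogger(__name__)


async def connect_to_db(app: FastAPI) -> None:
    # Append a suffix to the url when DB_SUFFIX is set (e.g. "_test" turns
    # postgres into postgres_test); default to an empty string otherwise.
    DB_SUFFIX = os.environ.get("DB_SUFFIX", "")
    database = Database(str(DATABASE_URL) + DB_SUFFIX, min_size=2, max_size=10)

    try:
        await database.connect()
        app.state._db = database
    except Exception as e:
        logger.warning("--- DB CONNECTION ERROR ---")
        logger.warning(e)
```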

In the connect_to_db function we’re using the databases package to establish a connection to our PostgreSQL db with the database URL string we configured in our core/config.py file. If we have an environment variable corresponding to the suffix we want placed at the end of the URL, we concatenate it (example: turn postgres into postgres_test). If os.environ has no DB_SUFFIX, then we default to an empty string. This helps us use the testing database for our testing session, and our regular database otherwise.

Next, we’ll open up the db/migrations/env.py file.

db/migrations/env.py
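As before, here’s a sketch of the relevant portion of env.py based on the description below; the surrounding alembic boilerplate (offline mode, logging config, and so on) is assumed to match the standard template.

```python
# db/migrations/env.py -- sketch of the testing-aware migration setup
import os

from alembic import context
from sqlalchemy import create_engine, text

from app.core.config import DATABASE_URL  # assumed import from earlier parts

config = context.config


def run_migrations_online() -> None:
    """Run migrations, creating a fresh postgres_test database when TESTING is set."""
    DB_URL = f"{DATABASE_URL}_test" if os.environ.get("TESTING") else str(DATABASE_URL)

    if os.environ.get("TESTING"):
        # Connect to the default database with AUTOCOMMIT so that CREATE DATABASE
        # isn't wrapped in a transaction -- Postgres won't allow that.
        default_engine = create_engine(str(DATABASE_URL), isolation_level="AUTOCOMMIT")
        with default_engine.connect() as default_conn:
            default_conn.execute(text("DROP DATABASE IF EXISTS postgres_test"))
            default_conn.execute(text("CREATE DATABASE postgres_test"))

    # Run the migrations against whichever database we just decided on
    connectable = create_engine(DB_URL)
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=None)
        with context.begin_transaction():
            context.run_migrations()


run_migrations_online()  # offline mode omitted from this sketch
```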

Now we’ve configured our migrations file to support creating and connecting to a testing database. When the TESTING environment variable is set, a postgres_test database will be created. Then the migrations will be run for that database before our tests can be run against it.

There’s an interesting block of code here that should probably be explained.

So what’s happening here? We first connect to our default database with credentials we know are valid. We specify the "AUTOCOMMIT" option for isolation_level to avoid manual transaction management when creating databases. SQLAlchemy always tries to run queries in a transaction, and Postgres does not allow users to create databases inside a transaction. To get around that, we end each open transaction automatically after execution. That allows us to drop a database and then create a new one inside of our default connection.

As soon as we’re done there, we move on to connecting to the database we want and off we go!

Writing Tests

Add this code to the test_cleanings.py file:

test_cleanings.py
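The test file isn’t reproduced here, so the sketch below follows the description underneath. The route name "cleanings:create-cleaning" comes from the previous post; the class and method names are assumptions.

```python
# tests/test_cleanings.py -- initial sketch; class and method names are assumptions
import pytest
from fastapi import FastAPI
from httpx import AsyncClient
from starlette.status import HTTP_404_NOT_FOUND, HTTP_422_UNPROCESSABLE_ENTITY


class TestCleaningsRoutes:
    @pytest.mark.asyncio
    async def test_routes_exist(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        assert res.status_code != HTTP_404_NOT_FOUND

    @pytest.mark.asyncio
    async def test_invalid_input_raises_error(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        # Deliberately wrong for now -- an empty body *should* produce a 422
        assert res.status_code != HTTP_422_UNPROCESSABLE_ENTITY
```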

This is the cool part. We’ve written a simple class that will be used to test that the routes associated with the Cleanings resource exist, and that they behave how we expect them to. We’re using the @pytest.mark.asyncio decorator on each class method so that pytest can deal with asynchronous test functions.

Each class method has two parameters - app and client. Because we created fixtures in our conftest.py file with the same name, Pytest makes these available to any test function.

Fixtures might seem like magic at first, but the pytest docs state:

“Fixtures have explicit names and are activated by declaring their use from test functions, modules, classes or whole projects. Fixtures are implemented in a modular manner, as each fixture name triggers a fixture function which can itself use other fixtures. Test functions can receive fixture objects by naming them as an input argument.”

We’ll use fixtures throughout our tests, so we’ll get a lot of practice using them.

Our first test uses the httpx client we created and issues a POST request to the route with the name of "cleanings:create-cleaning". If you look back at our app/api/routes/cleanings.py file, you’ll see that name in the decorator of our POST route. Anyone familiar with Django will recognize this pattern, as it mirrors Django’s url reversing system. Using this pattern means we don’t have to remember exact paths and can instead rely on the name of the route to send HTTP requests.

We’re asserting that we don’t get a 404 response when this request is sent, and we expect this test to pass.

The second test sends the same request, but expects the response to not include a 422 status code. FastAPI will return 422 status codes whenever the POST body includes an input with an invalid shape. Remember how FastAPI validates all input models using Pydantic? The models we write will determine the shape of the data we expect to receive. Anything else throws a 422.

This test should fail, as FastAPI expects our empty dictionary to have the shape of the CleaningCreate model we defined earlier.

We’ll run our tests by entering bash commands into our running server container.

Then, run the pytest -v command in bash.
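The exact container name or id depends on the docker-compose setup from the earlier posts, so treat these commands as a sketch:

```bash
# Find the running server container, then open a bash shell inside it
docker ps
docker exec -it <container_id_or_name> bash

# Inside the container, run the test suite verbosely
pytest -v
```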

This will start executing your tests inside the running container. The first should pass and the second should fail. When it fails, pytest will print a detailed report of the failing assertion.

What a nice error! It tells us exactly where the error is and why it failed. The assertion expected the response to NOT have a status code of 422, but it did. Fix the test by setting != to == and you should have the following.

test_cleanings.py
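Flipping the comparison, the second test from the sketch above becomes:

```python
    @pytest.mark.asyncio
    async def test_invalid_input_raises_error(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        assert res.status_code == HTTP_422_UNPROCESSABLE_ENTITY
```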

Run the tests again and watch them pass.

Now that we have a testing framework wired up, it’s off to the TDD races. Let’s write a bit more code for our existing POST route, and then we’ll develop tests for a GET endpoint that grabs a cleaning by its id.

Validating Our POST Route

Since we already have a working POST route, our tests won’t follow the traditional TDD structure. We’ll instead write tests to validate behavior of code that we’ve already implemented. The problem with doing it this way is that there’s no guarantee that our tests aren’t just passing arbitrarily - but for now it’ll be alright.

First, start by importing the CleaningCreate model and adding another starlette status code. We’ll also go ahead and define a new fixture for a new cleaning, and use it in the new testing class we’ll create. In addition, we’ll remove the @pytest.mark.asyncio decorators from each test and add pytestmark = pytest.mark.asyncio to have pytest decorate each function for us.

Let’s add the highlighted lines to our test_cleanings.py file.

test_cleanings.py
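Continuing the sketch, the additions might look like the following. The import path for CleaningCreate, the fixture’s field values, and the body being embedded under a "new_cleaning" key all follow the models and POST route from the previous posts and are assumptions here.

```python
# tests/test_cleanings.py -- additions; field values and body shape are assumptions
import pytest
from fastapi import FastAPI
from httpx import AsyncClient
from starlette.status import (
    HTTP_201_CREATED,
    HTTP_404_NOT_FOUND,
    HTTP_422_UNPROCESSABLE_ENTITY,
)

from app.models.cleaning import CleaningCreate

# Decorate every test in this module as an asyncio test,
# replacing the per-method @pytest.mark.asyncio decorators.
pytestmark = pytest.mark.asyncio


@pytest.fixture
def new_cleaning():
    return CleaningCreate(
        name="test cleaning",
        description="test description",
        price=10.00,
        cleaning_type="spot_clean",
    )


class TestCreateCleaning:
    async def test_valid_input_creates_cleaning(
        self, app: FastAPI, client: AsyncClient, new_cleaning: CleaningCreate
    ) -> None:
        res = await client.post(
            app.url_path_for("cleanings:create-cleaning"),
            json={"new_cleaning": new_cleaning.dict()},  # .dict() is the pydantic v1 API used in this series
        )
        assert res.status_code == HTTP_201_CREATED
        created_cleaning = CleaningCreate(**res.json())
        assert created_cleaning == new_cleaning
```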

Here we’re sending a POST request to our application and ensuring that the response coming from our database has the same shape and data as our input.

What’s cool about this test is that it’s actually executing queries against a real postgres database! No mocking needed.

If we run pytest -v again, we should see three passing tests. How do we know that our code is making this test pass? Well, we don’t. That’s why we should always write our tests first, watch them fail, and then write just enough code to make the tests pass. That’s TDD in a nutshell.

Before we do that, however, we should add a more robust test to check for invalid inputs.

Add one more test to our TestCreateCleaning class.

test_cleanings.py
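This method lives inside the TestCreateCleaning class from above. The specific invalid payloads are assumptions; five cases here line up with the eight passing items mentioned below.

```python
    @pytest.mark.parametrize(
        "invalid_payload, status_code",
        (
            (None, 422),
            ({}, 422),
            ({"name": "test_name"}, 422),
            ({"price": 10.00}, 422),
            ({"name": "test_name", "description": "test"}, 422),
        ),
    )
    async def test_invalid_input_raises_error(
        self, app: FastAPI, client: AsyncClient, invalid_payload: dict, status_code: int
    ) -> None:
        res = await client.post(
            app.url_path_for("cleanings:create-cleaning"),
            json={"new_cleaning": invalid_payload},
        )
        assert res.status_code == status_code
```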

This test takes advantage of another very useful pytest feature. As stated in the docs, the built-in pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Essentially, we’re able to pass in as many different versions of invalid_payload and status_code as we want, and pytest will generate a test for each set.

Running our tests again, we should now see 8 items passing. Not bad! Let’s move on to a new route.

Creating Endpoints the TDD Way

So we’ve finally gotten to the TDD part of this post. We have enough tools at our disposal that most of this should look familiar. Remember, we’re going to follow a 3 step process:

  1. Write a test and make it fail
  2. Write just enough code to make it pass
  3. Rinse, repeat, and refactor until satisfied

Let’s put together a GET route the TDD way. We’re currently able to insert cleanings into the database, so we’ll probably want to be able to fetch a single cleaning by its id. Confident readers should feel free to try this part out on their own before moving on to the next part. All others should carry on.

We’ll need to bring in a couple imports, and then add an additional test class to our test_cleanings.py file.

test_cleanings.py
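Sketching the new class: the route name "cleanings:get-cleaning-by-id" and the CleaningInDB model are assumptions based on the naming conventions from earlier parts.

```python
# tests/test_cleanings.py -- new imports and test class (names are assumptions)
from starlette.status import HTTP_200_OK  # add to the existing status imports

from app.models.cleaning import CleaningCreate, CleaningInDB


class TestGetCleaning:
    async def test_get_cleaning_by_id(self, app: FastAPI, client: AsyncClient) -> None:
        # The id keyword argument goes to url_path_for, not to client.get
        res = await client.get(app.url_path_for("cleanings:get-cleaning-by-id", id=1))
        assert res.status_code == HTTP_200_OK
        cleaning = CleaningInDB(**res.json())
        assert cleaning.id == 1
```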

Be careful to note that the id keyword argument is passed to the app.url_path_for function and not to the client.get function.

If we run our tests, we’ll see 8 tests passing and 1 test failing with a starlette.routing.NoMatchFound error. There’s a good reason for that too - we haven’t created the route yet. Let’s open up our routes/cleanings.py file and do that now.

routes/cleanings.py
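Here’s a sketch of the new route. The CleaningPublic response model and the get_repository dependency come from earlier parts of the series and are assumptions here, as is the exact path.

```python
# app/api/routes/cleanings.py -- sketch of the new GET route (router and POST route already exist)
from fastapi import APIRouter, Depends, HTTPException
from starlette.status import HTTP_404_NOT_FOUND

from app.api.dependencies.database import get_repository  # assumed from earlier parts
from app.db.repositories.cleanings import CleaningsRepository
from app.models.cleaning import CleaningPublic  # assumed response model

router = APIRouter()


@router.get("/{id}/", response_model=CleaningPublic, name="cleanings:get-cleaning-by-id")
async def get_cleaning_by_id(
    id: int,
    cleanings_repo: CleaningsRepository = Depends(get_repository(CleaningsRepository)),
) -> CleaningPublic:
    cleaning = await cleanings_repo.get_cleaning_by_id(id=id)

    if not cleaning:
        raise HTTPException(
            status_code=HTTP_404_NOT_FOUND, detail="No cleaning found with that id."
        )

    return cleaning
```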

Here we’re creating a GET route and adding the id as a path parameter to the route. FastAPI will make that value available to our route function as the variable id.

We start the route by calling the get_cleaning_by_id function and passing in the id of the cleaning we’re hoping to fetch. If we don’t get a cleaning returned by that function, we raise a 404 exception.

Running our tests one more time, we expect to see a new error message, and we do: AttributeError: 'CleaningsRepository' object has no attribute 'get_cleaning_by_id'. We haven’t written it yet, so that also makes sense. Head into db/repositories/cleanings.py and update it like so:

db/repositories/cleanings.py
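A sketch of the repository update, assuming the BaseRepository and the column names from the previous posts:

```python
# db/repositories/cleanings.py -- sketch of the new query and method
from typing import Optional

from app.db.repositories.base import BaseRepository  # assumed from earlier parts
from app.models.cleaning import CleaningCreate, CleaningInDB

GET_CLEANINGS_BY_ID_QUERY = """
    SELECT id, name, description, price, cleaning_type
    FROM cleanings
    WHERE id = :id;
"""


class CleaningsRepository(BaseRepository):
    # ...create_cleaning from the previous post stays as-is...

    async def get_cleaning_by_id(self, *, id: int) -> Optional[CleaningInDB]:
        cleaning = await self.db.fetch_one(query=GET_CLEANINGS_BY_ID_QUERY, values={"id": id})

        if not cleaning:
            return None

        return CleaningInDB(**cleaning)
```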

Now we’ve updated our CleaningsRepository with a function that executes the GET_CLEANINGS_BY_ID_QUERY, and searches in our cleanings table for an entry with a given id. If we find one, we return it. Otherwise we return None and our route will throw a 404 error.

Run those tests one more time and everything should pass. Why? Because our database persists for the duration of the testing session, so when we created a cleaning in our previous test, it’s still available here with an id of 1. That can be confusing, and it’s usually not a good idea to hard-code ids like we did in our test_get_cleaning_by_id function.

A more reliable approach would be to create a test_cleaning fixture and use that throughout our tests. Head into the conftest.py file and create that fixture like so:

conftest.py
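A sketch of the fixture; the field values and the create_cleaning signature mirror the repository from the previous post and are assumptions here.

```python
# tests/conftest.py -- additional fixture; field values are assumptions
import pytest
from databases import Database

from app.db.repositories.cleanings import CleaningsRepository
from app.models.cleaning import CleaningCreate, CleaningInDB


@pytest.fixture
async def test_cleaning(db: Database) -> CleaningInDB:
    cleaning_repo = CleaningsRepository(db)
    new_cleaning = CleaningCreate(
        name="fake cleaning name",
        description="fake cleaning description",
        price=9.99,
        cleaning_type="spot_clean",
    )
    return await cleaning_repo.create_cleaning(new_cleaning=new_cleaning)
```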

With that fixture in place, let’s refactor our test.

test_cleanings.py
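The refactored test inside TestGetCleaning might then look like this; comparing the two models directly works because pydantic models compare by field values.

```python
    async def test_get_cleaning_by_id(
        self, app: FastAPI, client: AsyncClient, test_cleaning: CleaningInDB
    ) -> None:
        res = await client.get(
            app.url_path_for("cleanings:get-cleaning-by-id", id=test_cleaning.id)
        )
        assert res.status_code == HTTP_200_OK
        cleaning = CleaningInDB(**res.json())
        assert cleaning == test_cleaning
```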

Run those tests again and everything should pass, meaning our fixture is working nicely and we don’t need to hardcode in any values. We’ll need to refactor it later on, but for now it should do. Let’s also add another test for invalid id entries.

test_cleanings.py
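One more method for TestGetCleaning. The particular invalid ids and the method name are assumptions; the idea is that nonexistent ids should return a 404 while ids that fail validation should return a 422.

```python
    @pytest.mark.parametrize(
        "id, status_code",
        (
            (500, 404),
            (-1, 404),
            (None, 422),
        ),
    )
    async def test_wrong_id_returns_error(
        self, app: FastAPI, client: AsyncClient, id: int, status_code: int
    ) -> None:
        res = await client.get(app.url_path_for("cleanings:get-cleaning-by-id", id=id))
        assert res.status_code == status_code
```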

We’re parametrizing the test and passing in multiple invalid ids. Each one is coupled with the status code we expect to see when we send that id to our GET route. Run those tests, make sure everything passes, and then call it a day.

Wrapping Up and Resources

This post ran a little long, and there are definitely improvements that could be made to our work. For now though, this’ll do. With our testing framework in place, we can now reflect on what we’ve accomplished: we set up pytest inside our Docker container, wired up a testing environment that runs migrations against a dedicated postgres_test database, validated our existing POST route, and built a brand new GET route the TDD way.

At this point, we’re finished with most of the setup and can focus on actually developing features for our application from here on out. Readers who want to learn more can refer to the resources that were used to design this architecture, starting with the three repos linked at the top of this post.

GitHub Repo

All code up to this point can be found here:

Special thanks to Cameron Tully-Smith, Artur Kosorz, Charlotte Findlay, and S0h3k for correcting errors in the original code.


Previous Post:

Hooking FastAPI Endpoints up to a Postgres Database

Next Post:

Resource Management with FastAPI