Evaluations and SQL Aggregations in FastAPI


Welcome to Part 12 of Up and Running with FastAPI. If you missed part 11, you can find it here.

This series is focused on building a full-stack application with the FastAPI framework. The app allows users to post requests to have their residence cleaned, and other users can select a cleaning project for a given hourly rate.


In the last post, we implemented the beginnings of a marketplace-like environment where users can accept, reject, cancel, and rescind offers for cleaning jobs. Now we’re going to provide owners the ability to mark jobs as completed and evaluate the cleaner’s work. On top of that, we’ll provide a way to aggregate user evaluations and serve up a brief overview on the quality of a given cleaner’s work.

Migrations

Of course, we’ll need to update our database to make this happen.

Start by rolling back the current migrations as we’ve done previously.

Then go ahead and create a new table in the same migrations file like so:
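Here's a rough sketch of what that table definition might look like. The table name and ON DELETE behavior are assumptions, and the timestamp columns we've added in earlier migrations are omitted for brevity:

```python
# a sketch - added to the same migrations file as our other tables
import sqlalchemy as sa
from alembic import op


def create_cleaner_evaluations_table() -> None:
    op.create_table(
        "cleaning_to_cleaner_evaluations",  # table name is an assumption
        sa.Column(
            "cleaning_id",
            sa.Integer,
            sa.ForeignKey("cleanings.id", ondelete="CASCADE"),
            nullable=False,
            index=True,
        ),
        sa.Column(
            "cleaner_id",
            sa.Integer,
            sa.ForeignKey("users.id", ondelete="CASCADE"),
            nullable=False,
            index=True,
        ),
        # rating categories - each an integer between 0 and 5
        sa.Column("professionalism", sa.Integer, nullable=True),
        sa.Column("completeness", sa.Integer, nullable=True),
        sa.Column("efficiency", sa.Integer, nullable=True),
        sa.Column("overall_rating", sa.Integer, nullable=False),
    )
    # one evaluation per cleaner, per cleaning job
    op.create_primary_key(
        "pk_cleaning_to_cleaner_evaluations",
        "cleaning_to_cleaner_evaluations",
        ["cleaning_id", "cleaner_id"],
    )
```

Remember to call this function in the migration's upgrade step alongside the other create-table calls.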

Notice how we don’t include the owner of the cleaning job in this table? We don’t need to. The cleaning_id has a unique owners column attached, so there’s no need to add redundant data here.

This table gives the owner of a cleaning job the ability to evaluate a cleaner in three categories - professionalism, completeness, and efficiency. These are arbitrary, so feel free to change them. We also provide an overall_rating column for a more general evaluation. And, as we did with offers, we guarantee that each cleaner can only be evaluated once per cleaning job by using the combination of cleaning_id and cleaner_id as our composite primary key.

Run the migrations with alembic upgrade head and move on to the models.

Evaluation models

Create a new file to house our evaluation models.

And add the following:

models/evaluation.py
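Something along these lines should do the trick. This is only a sketch - the CoreModel and DateTimeModelMixin base classes come from earlier posts, and the exact class and field names here are assumptions:

```python
from typing import Optional

from pydantic import confloat, conint

from app.models.core import CoreModel, DateTimeModelMixin  # base classes from earlier posts
from app.models.user import UserPublic


class EvaluationBase(CoreModel):
    professionalism: Optional[conint(ge=0, le=5)]
    completeness: Optional[conint(ge=0, le=5)]
    efficiency: Optional[conint(ge=0, le=5)]
    overall_rating: Optional[conint(ge=0, le=5)]


class EvaluationCreate(EvaluationBase):
    overall_rating: conint(ge=0, le=5)  # the only required attribute


class EvaluationInDB(DateTimeModelMixin, EvaluationBase):
    cleaner_id: int
    cleaning_id: int


class EvaluationPublic(EvaluationInDB):
    owner: Optional[UserPublic]  # optionally attach the owner of the cleaning job


class EvaluationAggregate(CoreModel):
    avg_professionalism: confloat(ge=0, le=5)
    avg_completeness: confloat(ge=0, le=5)
    avg_efficiency: confloat(ge=0, le=5)
    avg_overall_rating: confloat(ge=0, le=5)
    max_overall_rating: conint(ge=0, le=5)
    min_overall_rating: conint(ge=0, le=5)
    one_stars: conint(ge=0)
    two_stars: conint(ge=0)
    three_stars: conint(ge=0)
    four_stars: conint(ge=0)
    five_stars: conint(ge=0)
    total_evaluations: conint(ge=0)
```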

For the most part we simply mirror the functionality of the models in our offer.py file, though we’re using the conint type here for ratings. The docs for that constrained type can be found here - it lets us specify that each rating must be an integer between 0 and 5, inclusive.

The only attribute we require to create an evaluation is overall_rating. We also provide the ability to attach the owner to the EvaluationPublic model if we feel like doing so.

At the bottom we see something new - the EvaluationAggregate model. Here we collect summary statistics about a given cleaner similar to how Amazon, Yelp, and other rating sites do. We’re using conint and confloat to restrict the values slightly.

Now let’s do some testing.

Testing Evaluation Routes

First, create a new testing file for our evaluation routes.

And create our standard test routes class.

tests/test_evaluations.py
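A sketch of that class - the route names used here are assumptions that we'll need to match when we define the real routes in a moment, and the app and client fixtures come from our existing conftest.py:

```python
import pytest
from fastapi import FastAPI, status
from httpx import AsyncClient

pytestmark = pytest.mark.asyncio


class TestEvaluationRoutes:
    async def test_routes_exist(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(
            app.url_path_for(
                "evaluations:create-evaluation-for-cleaner", cleaning_id=1, username="bradpitt"
            ),
            json={},
        )
        assert res.status_code != status.HTTP_404_NOT_FOUND
        res = await client.get(
            app.url_path_for(
                "evaluations:get-evaluation-for-cleaner", cleaning_id=1, username="bradpitt"
            )
        )
        assert res.status_code != status.HTTP_404_NOT_FOUND
        res = await client.get(
            app.url_path_for("evaluations:list-evaluations-for-cleaner", username="bradpitt")
        )
        assert res.status_code != status.HTTP_404_NOT_FOUND
```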

Not much to see here. We’re only testing the existence of three routes - create, get, and list. For the time being, we won’t allow owners to delete or update their evaluations, though that may change later.

Run the tests and watch them fail.

We’ll also need a routes file, a dependencies file, and a repository file for our evaluations, so let's create all three now.

Now add some dummy routes to the routes file.

api/routes/evaluations.py
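Something like this will do for now - the handlers just return placeholder data until we wire up the repository:

```python
from fastapi import APIRouter, status

router = APIRouter()


@router.post(
    "/{cleaning_id}/",
    name="evaluations:create-evaluation-for-cleaner",
    status_code=status.HTTP_201_CREATED,
)
async def create_evaluation_for_cleaner(cleaning_id: int) -> dict:
    return {"evaluation": "placeholder"}


@router.get("/", name="evaluations:list-evaluations-for-cleaner")
async def list_evaluations_for_cleaner() -> dict:
    return {"evaluations": "placeholder"}


@router.get("/{cleaning_id}/", name="evaluations:get-evaluation-for-cleaner")
async def get_evaluation_for_cleaner(cleaning_id: int) -> dict:
    return {"evaluation": "placeholder"}
```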

Let’s also go ahead and register those routes in our api router.

api/routes/__init__.py
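Here's roughly how that registration might look. The other routers and prefixes are carried over from previous posts, so treat the exact module names as assumptions:

```python
from fastapi import APIRouter

from app.api.routes.cleanings import router as cleanings_router
from app.api.routes.evaluations import router as evaluations_router
from app.api.routes.offers import router as offers_router
from app.api.routes.profiles import router as profiles_router
from app.api.routes.users import router as users_router

router = APIRouter()

router.include_router(cleanings_router, prefix="/cleanings", tags=["cleanings"])
router.include_router(users_router, prefix="/users", tags=["users"])
router.include_router(profiles_router, prefix="/profiles", tags=["profiles"])
router.include_router(offers_router, prefix="/cleanings/{cleaning_id}/offers", tags=["offers"])
# namespace evaluations under the cleaner they belong to
router.include_router(evaluations_router, prefix="/users/{username}/evaluations", tags=["evaluations"])
```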

This time we’re namespacing the evaluations router under users/{username}/evaluations. It’s essentially the reverse of what we did with offers, and it will come in handy when we want to aggregate user statistics later on.

Before we can get our tests passing, we’ll need a placeholder EvaluationsRepository, so add this as well.

db/repositories/evaluations.py
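A minimal placeholder might look like this, using the same BaseRepository every other repository in the series inherits from:

```python
from databases import Database

from app.db.repositories.base import BaseRepository
from app.db.repositories.offers import OffersRepository


class EvaluationsRepository(BaseRepository):
    """Repository for owner evaluations of cleaners."""

    def __init__(self, db: Database) -> None:
        super().__init__(db)
        self.offers_repo = OffersRepository(db)
```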

We’re integrating the OffersRepository in our __init__ method, as we’ll need it in a minute.

Run the tests now and they should pass.

Create Evaluations

Let’s begin with some standard tests. Here’s the functionality we want to implement:

  1. Owners of a cleaning job can leave evaluations for cleaners who completed that cleaning job.
  2. Evaluations can only be made for cleaning jobs with an accepted offer.
  3. Evaluations can only be made for users whose offer was accepted for that job.
  4. Evaluations can only be left once per cleaning.
  5. Once an evaluation has been made, the offer should be marked as completed.

Let’s test those conditions with a new test class.

tests/test_evaluations.py

Ok, we have four new test cases that should cover the functionality we’re hoping to implement. In the first test case we’re checking to make sure the status of our offer is "completed" as soon as the evaluation has been created. We haven’t added that option to any of our offer models, so keep that in mind as we start fleshing out our EvaluationsRepository.
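Here's a sketch of just the first of those test cases - the happy path. The fixtures and the OffersRepository method used at the end follow the conventions from earlier posts and the conftest.py fixture we'll add later, so treat their names as assumptions:

```python
# additional imports for this test class (assumed module paths)
from app.db.repositories.offers import OffersRepository
from app.models.evaluation import EvaluationInDB


class TestCreateEvaluations:
    async def test_owner_can_leave_evaluation_and_offer_is_marked_completed(
        self,
        app: FastAPI,
        create_authorized_client,            # authorized-client factory fixture (assumption)
        test_user2,                          # owner of the cleaning job (assumption)
        test_user3,                          # cleaner whose offer was accepted (assumption)
        test_cleaning_with_accepted_offer,   # fixture with an already-accepted offer (assumption)
    ) -> None:
        evaluation_create = {
            "professionalism": 5,
            "completeness": 4,
            "efficiency": 4,
            "overall_rating": 5,
        }
        authorized_client = create_authorized_client(user=test_user2)
        res = await authorized_client.post(
            app.url_path_for(
                "evaluations:create-evaluation-for-cleaner",
                cleaning_id=test_cleaning_with_accepted_offer.id,
                username=test_user3.username,
            ),
            json={"evaluation_create": evaluation_create},
        )
        assert res.status_code == status.HTTP_201_CREATED
        evaluation = EvaluationInDB(**res.json())
        assert evaluation.overall_rating == evaluation_create["overall_rating"]
        # the cleaner's offer for this job should now be marked as "completed"
        # (method name from part 11 - assumption)
        offers_repo = OffersRepository(app.state._db)
        offer = await offers_repo.get_offer_for_cleaning_from_user(
            cleaning=test_cleaning_with_accepted_offer, user=test_user3
        )
        assert offer.status == "completed"
```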

In fact, let’s start writing the code our POST route will need so that we can determine how our database interface should look.

api/routes/evaluations.py
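Here's a sketch of the updated POST route, replacing the dummy one from before. The dependency and module names come from earlier posts and are assumptions:

```python
from fastapi import APIRouter, Body, Depends, status

from app.api.dependencies.cleanings import get_cleaning_by_id_from_path  # assumption
from app.api.dependencies.database import get_repository
from app.api.dependencies.users import get_user_by_username_from_path    # assumption
from app.db.repositories.evaluations import EvaluationsRepository
from app.models.cleaning import CleaningInDB
from app.models.evaluation import EvaluationCreate, EvaluationPublic
from app.models.user import UserInDB

router = APIRouter()


@router.post(
    "/{cleaning_id}/",
    response_model=EvaluationPublic,
    name="evaluations:create-evaluation-for-cleaner",
    status_code=status.HTTP_201_CREATED,
)
async def create_evaluation_for_cleaner(
    evaluation_create: EvaluationCreate = Body(..., embed=True),
    cleaner: UserInDB = Depends(get_user_by_username_from_path),
    cleaning: CleaningInDB = Depends(get_cleaning_by_id_from_path),
    evals_repo: EvaluationsRepository = Depends(get_repository(EvaluationsRepository)),
) -> EvaluationPublic:
    return await evals_repo.create_evaluation_for_cleaner(
        evaluation_create=evaluation_create, cleaner=cleaner, cleaning=cleaning
    )
```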

This route is mostly dependencies. We return the result of a single call to our EvaluationsRepository with the create_evaluation_for_cleaner method. That method takes in the cleaner who is being evaluated, the cleaning job the cleaner has completed, and the evaluation_create that’s being added to the database. We’ve already defined each of the needed dependencies in our previous posts, so no need to touch our api/dependencies/evaluations.py file just yet.

Let’s head into our EvaluationsRepository.

db/repositories/evaluations.py
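Here's one way that might look - the query and table names are assumptions matching the migration sketch above:

```python
from databases import Database

from app.db.repositories.base import BaseRepository
from app.db.repositories.offers import OffersRepository
from app.models.cleaning import CleaningInDB
from app.models.evaluation import EvaluationCreate, EvaluationInDB
from app.models.user import UserInDB

CREATE_OWNER_EVALUATION_FOR_CLEANER_QUERY = """
    INSERT INTO cleaning_to_cleaner_evaluations (
        cleaning_id, cleaner_id, professionalism, completeness, efficiency, overall_rating
    )
    VALUES (
        :cleaning_id, :cleaner_id, :professionalism, :completeness, :efficiency, :overall_rating
    )
    RETURNING *;
"""


class EvaluationsRepository(BaseRepository):
    def __init__(self, db: Database) -> None:
        super().__init__(db)
        self.offers_repo = OffersRepository(db)

    async def create_evaluation_for_cleaner(
        self, *, evaluation_create: EvaluationCreate, cleaner: UserInDB, cleaning: CleaningInDB
    ) -> EvaluationInDB:
        async with self.db.transaction():
            # create the evaluation and attach the cleaning and cleaner ids to it
            created_evaluation = await self.db.fetch_one(
                query=CREATE_OWNER_EVALUATION_FOR_CLEANER_QUERY,
                values={
                    **evaluation_create.dict(),
                    "cleaning_id": cleaning.id,
                    "cleaner_id": cleaner.id,
                },
            )
            # mark the cleaner's offer for this job as "completed"
            await self.offers_repo.mark_offer_completed(cleaning=cleaning, cleaner=cleaner)
            return EvaluationInDB(**created_evaluation)
```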

Ok, now we’ve got something going. We execute our queries inside a transaction block so that this becomes an all-or-nothing trip to the db. First, we create the new evaluation and attach the cleaning_id and cleaner_id to it. Then, we mark the cleaner’s offer for this job as "completed" using a yet-to-be-created method on the OffersRepository called mark_offer_completed. If all goes well, we return the newly-created evaluation.

We’ll need to define the mark_offer_completed method next, so open up that file.

db/repositories/offers.py
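A sketch of that method - the offers table name is an assumption carried over from part 11:

```python
MARK_OFFER_COMPLETED_QUERY = """
    UPDATE user_offers_for_cleanings  -- table name from part 11 (assumption)
    SET status = 'completed'
    WHERE cleaning_id = :cleaning_id AND user_id = :user_id;
"""


class OffersRepository(BaseRepository):
    # ...existing methods from part 11...

    async def mark_offer_completed(self, *, cleaning: CleaningInDB, cleaner: UserInDB) -> None:
        await self.db.execute(
            query=MARK_OFFER_COMPLETED_QUERY,
            values={"cleaning_id": cleaning.id, "user_id": cleaner.id},
        )
```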

Nothing too crazy here. We simply add a method that sets the status of the offer to "completed". We’ll also need to update the models/offer.py file to recognize this option for status, so open up that file and modify it like so:

models/offer.py
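If the status field was modeled as a string Enum in part 11, the change is just one new member. The exact member names here are assumptions:

```python
from enum import Enum


class OfferStatus(str, Enum):
    pending = "pending"
    accepted = "accepted"
    rejected = "rejected"
    cancelled = "cancelled"
    completed = "completed"  # new - set once the owner evaluates the cleaner's work
```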

And there we go! Run the tests again and the first condition should pass. The other tests are returning 201 responses instead of the 403 and 400s we expect, so we’ll need to create a new dependency to deal with that properly.

api/dependencies/evaluations.py
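Here's a sketch of that dependency. The names of the path-parameter dependencies it composes come from previous posts and are assumptions:

```python
from fastapi import Depends, HTTPException, status

from app.api.dependencies.auth import get_current_active_user
from app.api.dependencies.cleanings import get_cleaning_by_id_from_path           # assumption
from app.api.dependencies.offers import get_offer_for_cleaning_from_user_by_path  # assumption
from app.api.dependencies.users import get_user_by_username_from_path             # assumption
from app.models.cleaning import CleaningInDB
from app.models.offer import OfferInDB
from app.models.user import UserInDB


async def check_evaluation_create_permissions(
    current_user: UserInDB = Depends(get_current_active_user),
    cleaning: CleaningInDB = Depends(get_cleaning_by_id_from_path),
    cleaner: UserInDB = Depends(get_user_by_username_from_path),
    offer: OfferInDB = Depends(get_offer_for_cleaning_from_user_by_path),
) -> None:
    # only the owner of the cleaning job can evaluate the cleaner
    if cleaning.owner != current_user.id:
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail="Users are unable to leave evaluations for cleaning jobs they do not own.",
        )
    # evaluations can only be made for offers that have been accepted
    if offer.status != "accepted":
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Only users with accepted offers can be evaluated.",
        )
    # the cleaner being evaluated must be the user who made the offer
    if offer.user_id != cleaner.id:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Offers can only be evaluated for the user who made them.",
        )
```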

Though we’re only checking three conditions here, it feels like there’s a lot going on with this dependency. We get the cleaning, cleaner, and offer from path parameters using dependencies we’ve defined in previous posts, and we grab the current user making the request. Composing dependencies in this manner is a clean way to manage resource access. Having already tested their behavior, we can be relatively confident that they’re behaving the way we expect them to.

So all we need to do is check that the currently authenticated user is the owner of the cleaning job, that the status of the offer in question is "accepted", and that the cleaner being evaluated is the one who created the offer.

Let’s update our POST route with this new dependency.

api/routes/evaluations.py
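The only change from the earlier sketch is the dependencies list on the decorator:

```python
from app.api.dependencies.evaluations import check_evaluation_create_permissions


@router.post(
    "/{cleaning_id}/",
    response_model=EvaluationPublic,
    name="evaluations:create-evaluation-for-cleaner",
    status_code=status.HTTP_201_CREATED,
    dependencies=[Depends(check_evaluation_create_permissions)],
)
async def create_evaluation_for_cleaner(
    evaluation_create: EvaluationCreate = Body(..., embed=True),
    cleaner: UserInDB = Depends(get_user_by_username_from_path),
    cleaning: CleaningInDB = Depends(get_cleaning_by_id_from_path),
    evals_repo: EvaluationsRepository = Depends(get_repository(EvaluationsRepository)),
) -> EvaluationPublic:
    return await evals_repo.create_evaluation_for_cleaner(
        evaluation_create=evaluation_create, cleaner=cleaner, cleaning=cleaning
    )
```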

Run those tests again…and they pass! Fantastic.

On to the GET routes.

Fetching Evaluations

Before we start listing evaluations, we’ll need to create a new pytest fixture in our conftest.py file.

We’ll begin by defining a helper function and then move to our fixture.

tests/conftest.py
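A sketch of the helper and fixture. The repository method names and signatures below follow earlier parts of the series and the fixture names are assumptions - adapt them to whatever your conftest.py already uses:

```python
import random
from typing import List

import pytest
from databases import Database

from app.db.repositories.cleanings import CleaningsRepository
from app.db.repositories.evaluations import EvaluationsRepository
from app.db.repositories.offers import OffersRepository
from app.models.cleaning import CleaningCreate, CleaningInDB
from app.models.evaluation import EvaluationCreate
from app.models.offer import OfferCreate, OfferUpdate
from app.models.user import UserInDB


async def create_cleaning_with_evaluated_offer_helper(
    db: Database,
    owner: UserInDB,
    cleaner: UserInDB,
    cleaning_create: CleaningCreate,
    evaluation_create: EvaluationCreate,
) -> CleaningInDB:
    cleaning_repo = CleaningsRepository(db)
    offers_repo = OffersRepository(db)
    evals_repo = EvaluationsRepository(db)
    # repo method signatures from earlier posts (assumptions)
    created_cleaning = await cleaning_repo.create_cleaning(
        new_cleaning=cleaning_create, requesting_user=owner
    )
    offer = await offers_repo.create_offer_for_cleaning(
        new_offer=OfferCreate(cleaning_id=created_cleaning.id, user_id=cleaner.id)
    )
    await offers_repo.accept_offer(offer=offer, offer_update=OfferUpdate(status="accepted"))
    await evals_repo.create_evaluation_for_cleaner(
        evaluation_create=evaluation_create, cleaning=created_cleaning, cleaner=cleaner
    )
    return created_cleaning


@pytest.fixture
async def test_list_of_cleanings_with_evaluated_offer(
    db: Database, test_user2: UserInDB, test_user3: UserInDB
) -> List[CleaningInDB]:
    return [
        await create_cleaning_with_evaluated_offer_helper(
            db=db,
            owner=test_user2,
            cleaner=test_user3,
            cleaning_create=CleaningCreate(
                name=f"test cleaning - {i}",
                description=f"test description - {i}",
                price=10.00 + i,
                cleaning_type="full_clean",
            ),
            evaluation_create=EvaluationCreate(
                professionalism=random.randint(0, 5),
                completeness=random.randint(0, 5),
                efficiency=random.randint(0, 5),
                overall_rating=random.randint(0, 5),
            ),
        )
        for i in range(5)
    ]
```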

Ok. So we’re creating 5 new cleaning jobs and making an offer for each one. We then accept the offer and create an evaluation for it with some random rating values. This will be useful when we start aggregating them.

With that in place, let’s start testing.

Add the following new test case to the tests/test_evaluations.py file.

tests/test_evaluations.py
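Here's a condensed sketch of the aggregate-stats portion of that test class. The route and fixture names are assumptions consistent with the rest of this post:

```python
# additional imports for this test class
import statistics

from app.models.evaluation import EvaluationAggregate, EvaluationInDB


class TestGetEvaluations:
    async def test_user_can_get_aggregate_stats_for_cleaner(
        self,
        app: FastAPI,
        create_authorized_client,                     # fixture from earlier posts (assumption)
        test_user2,
        test_user3,
        test_list_of_cleanings_with_evaluated_offer,  # fixture sketched above (assumption)
    ) -> None:
        authorized_client = create_authorized_client(user=test_user2)
        # fetch the aggregate stats for the cleaner
        res = await authorized_client.get(
            app.url_path_for("evaluations:get-stats-for-cleaner", username=test_user3.username)
        )
        assert res.status_code == status.HTTP_200_OK
        stats = EvaluationAggregate(**res.json())
        # fetch the individual evaluations and check the aggregates against them
        res = await authorized_client.get(
            app.url_path_for("evaluations:list-evaluations-for-cleaner", username=test_user3.username)
        )
        assert res.status_code == status.HTTP_200_OK
        evaluations = [EvaluationInDB(**e) for e in res.json()]
        assert stats.total_evaluations == len(evaluations)
        assert stats.avg_overall_rating == pytest.approx(
            statistics.mean(e.overall_rating for e in evaluations), abs=0.01
        )
        assert stats.five_stars == sum(1 for e in evaluations if e.overall_rating == 5)
        assert stats.max_overall_rating == max(e.overall_rating for e in evaluations)
        assert stats.min_overall_rating == min(e.overall_rating for e in evaluations)
```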

Oh my.

We ensure that any user can fetch a single evaluation or a list of evaluations for a given cleaner. We also expect that users can fetch aggregate ratings for a given cleaner - things like average rating and the number of 5 star ratings, 4 star ratings, etc.

Instead of doing the mean calculations by hand, we’re importing the statistics package and letting it handle the calculations for us.

Run those tests and watch them fail miserably. Looks like we have some work to do.

Let’s hop over to our routes and flesh them out.

api/routes/evaluations.py
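A sketch of the three GET routes. The dependency names are assumptions and are defined in the next step:

```python
# additional imports
from typing import List

from app.api.dependencies.evaluations import (
    get_evaluation_for_cleaner_from_path,
    list_evaluations_for_cleaner_from_path,
)
from app.models.evaluation import EvaluationAggregate, EvaluationInDB


@router.get(
    "/stats/",
    response_model=EvaluationAggregate,
    name="evaluations:get-stats-for-cleaner",
)
async def get_stats_for_cleaner(
    cleaner: UserInDB = Depends(get_user_by_username_from_path),
    evals_repo: EvaluationsRepository = Depends(get_repository(EvaluationsRepository)),
) -> EvaluationAggregate:
    return await evals_repo.get_cleaner_aggregates(cleaner=cleaner)


# keep this route AFTER /stats/ - see the note below
@router.get(
    "/{cleaning_id}/",
    response_model=EvaluationPublic,
    name="evaluations:get-evaluation-for-cleaner",
)
async def get_evaluation_for_cleaner(
    evaluation: EvaluationInDB = Depends(get_evaluation_for_cleaner_from_path),
) -> EvaluationPublic:
    return evaluation


@router.get(
    "/",
    response_model=List[EvaluationPublic],
    name="evaluations:list-evaluations-for-cleaner",
)
async def list_evaluations_for_cleaner(
    evaluations: List[EvaluationInDB] = Depends(list_evaluations_for_cleaner_from_path),
) -> List[EvaluationPublic]:
    return evaluations
```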

As we’ve done before, most of the grunt work is handled by our dependencies.

Important note! The order in which we define these routes ABSOLUTELY DOES MATTER. If we were to put the /stats/ route after our evaluations:get-evaluation-for-cleaner route, that stats route wouldn’t work. FastAPI would assume that “stats” is the ID of a cleaning and throw a 422 error since “stats” is not an integer id.

We’re primarily using two dependencies to do the evaluation fetching, so we’ll need to create those.

api/dependencies/evaluations.py
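Something like this - the repository method names are assumptions that we'll define next:

```python
from typing import List

from fastapi import Depends

from app.api.dependencies.cleanings import get_cleaning_by_id_from_path  # assumption
from app.api.dependencies.database import get_repository
from app.api.dependencies.users import get_user_by_username_from_path    # assumption
from app.db.repositories.evaluations import EvaluationsRepository
from app.models.cleaning import CleaningInDB
from app.models.evaluation import EvaluationInDB
from app.models.user import UserInDB


async def get_evaluation_for_cleaner_from_path(
    cleaner: UserInDB = Depends(get_user_by_username_from_path),
    cleaning: CleaningInDB = Depends(get_cleaning_by_id_from_path),
    evals_repo: EvaluationsRepository = Depends(get_repository(EvaluationsRepository)),
) -> EvaluationInDB:
    return await evals_repo.get_cleaner_evaluation_for_cleaning(cleaning=cleaning, cleaner=cleaner)


async def list_evaluations_for_cleaner_from_path(
    cleaner: UserInDB = Depends(get_user_by_username_from_path),
    evals_repo: EvaluationsRepository = Depends(get_repository(EvaluationsRepository)),
) -> List[EvaluationInDB]:
    return await evals_repo.list_evaluations_for_cleaner(cleaner=cleaner)
```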

We have two standard dependencies, each using the EvaluationsRepository to fetch evaluations depending on resources fetched from the path parameters.

Let’s add the needed methods to our evals repo.

db/repositories/evaluations.py
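Here's a sketch of those additions. The aggregate query is broken down piece by piece below; the table name and the use of COUNT with a FILTER clause are assumptions:

```python
# additional imports
from typing import List, Optional

from app.models.evaluation import EvaluationAggregate

GET_CLEANER_EVALUATION_FOR_CLEANING_QUERY = """
    SELECT *
    FROM cleaning_to_cleaner_evaluations
    WHERE cleaning_id = :cleaning_id AND cleaner_id = :cleaner_id;
"""

LIST_EVALUATIONS_FOR_CLEANER_QUERY = """
    SELECT *
    FROM cleaning_to_cleaner_evaluations
    WHERE cleaner_id = :cleaner_id;
"""

GET_CLEANER_AGGREGATE_RATINGS_QUERY = """
    SELECT
        AVG(professionalism) AS avg_professionalism,
        AVG(completeness)    AS avg_completeness,
        AVG(efficiency)      AS avg_efficiency,
        AVG(overall_rating)  AS avg_overall_rating,
        MIN(overall_rating)  AS min_overall_rating,
        MAX(overall_rating)  AS max_overall_rating,
        COUNT(overall_rating) FILTER (WHERE overall_rating = 1) AS one_stars,
        COUNT(overall_rating) FILTER (WHERE overall_rating = 2) AS two_stars,
        COUNT(overall_rating) FILTER (WHERE overall_rating = 3) AS three_stars,
        COUNT(overall_rating) FILTER (WHERE overall_rating = 4) AS four_stars,
        COUNT(overall_rating) FILTER (WHERE overall_rating = 5) AS five_stars,
        COUNT(*)             AS total_evaluations
    FROM cleaning_to_cleaner_evaluations
    WHERE cleaner_id = :cleaner_id;
"""


class EvaluationsRepository(BaseRepository):
    # __init__ and create_evaluation_for_cleaner stay exactly as before

    async def get_cleaner_evaluation_for_cleaning(
        self, *, cleaning: CleaningInDB, cleaner: UserInDB
    ) -> Optional[EvaluationInDB]:
        evaluation = await self.db.fetch_one(
            query=GET_CLEANER_EVALUATION_FOR_CLEANING_QUERY,
            values={"cleaning_id": cleaning.id, "cleaner_id": cleaner.id},
        )
        return EvaluationInDB(**evaluation) if evaluation else None

    async def list_evaluations_for_cleaner(self, *, cleaner: UserInDB) -> List[EvaluationInDB]:
        evaluations = await self.db.fetch_all(
            query=LIST_EVALUATIONS_FOR_CLEANER_QUERY, values={"cleaner_id": cleaner.id}
        )
        return [EvaluationInDB(**e) for e in evaluations]

    async def get_cleaner_aggregates(self, *, cleaner: UserInDB) -> EvaluationAggregate:
        aggregates = await self.db.fetch_one(
            query=GET_CLEANER_AGGREGATE_RATINGS_QUERY, values={"cleaner_id": cleaner.id}
        )
        return EvaluationAggregate(**aggregates)
```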

Some interesting stuff going on here, mostly in the SQL for our GET_CLEANER_AGGREGATE_RATINGS_QUERY.

We’ll break down each chunk of SELECT statements to illustrate what’s happening. If we only queried the average values, our SQL would look like this:
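Something along these lines, using the table name assumed in the migration sketch above:

```sql
SELECT
    AVG(professionalism) AS avg_professionalism,
    AVG(completeness)    AS avg_completeness,
    AVG(efficiency)      AS avg_efficiency,
    AVG(overall_rating)  AS avg_overall_rating
FROM cleaning_to_cleaner_evaluations
WHERE cleaner_id = :cleaner_id;
```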

We find all rows where the cleaner_id matches the user in question. We then use the AVG aggregation to find the mean of each rating category. This is what users probably care about most, and so it’s important to provide that for each cleaner.

Moving on to querying only the sums, here’s what our SQL would look like:
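Again a sketch - the original query may use SUM over CASE expressions, but PostgreSQL's COUNT with a FILTER clause accomplishes the same tally:

```sql
SELECT
    COUNT(overall_rating) FILTER (WHERE overall_rating = 1) AS one_stars,
    COUNT(overall_rating) FILTER (WHERE overall_rating = 2) AS two_stars,
    COUNT(overall_rating) FILTER (WHERE overall_rating = 3) AS three_stars,
    COUNT(overall_rating) FILTER (WHERE overall_rating = 4) AS four_stars,
    COUNT(overall_rating) FILTER (WHERE overall_rating = 5) AS five_stars,
    COUNT(*) AS total_evaluations
FROM cleaning_to_cleaner_evaluations
WHERE cleaner_id = :cleaner_id;
```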

The interesting selections here use COUNT with a FILTER clause to tally how many evaluations land in each star bucket, while a standard COUNT gives us the total number of evaluations.

We also have a maximum and minimum overall rating just to round things off.

All together, this query provides us with a nice set of aggregate statistics that we can send to our frontend when needed. Doing the aggregation in SQL is far more efficient than pulling every evaluation into Python and computing the statistics there, so we’ll lean on postgres whenever we can.

Run the tests again and they should all pass.

Wrapping Up and Resources

Well there you have it. We’ve added an evaluation system to our application and used SQL aggregations to provide summary statistics for a given cleaner. At the moment there’s no payment system, and we’re letting users sort that out under the table.

Now seems like the right time to start building our front end, so we’ll do that in the next post.

Github Repo

All code up to this point can be found here:


Previous Post: Marketplace Functionality in FastAPI

Next Post: Phresh Frontend - Bootstrapping A React App