Python Testing with Pytest

June 15, 2022

pytest

This document is an attempt at immortalizing some of the lessons I’ve learned about pytest and Python testing. Note that it focuses mainly on the Python testing framework pytest.

Test Setup

Pytest generally follows standard test discovery rules: it collects tests from all files in the current directory and its subdirectories named test_*.py or *_test.py.

Tests can be grouped into classes (which can themselves be nested), and each class can contain more than one test.

I’ve followed the convention of an outer class representing the file under test, inner classes for each method, and test methods for the individual cases you may encounter from the method under test.

As an example (this isn’t actual production code):

# file under test: generator.py
class Generator:
    def generate(self, value=1):
        return value


# content of test_generator.py
from . import Generator


class TestGenerator:
    class TestGenerate:
        def test_generate_returns_default(self):
            generator = Generator()
            assert generator.generate() == 1

        def test_generate_returns_value(self):
            generator = Generator()
            assert generator.generate(value=2) == 2

        # potential other generate behavior ...

⭐️ Note: Each test function name must be prefixed with test_

Running Particular Tests

Assuming I’m in the same directory as the above test file, issuing the command pytest should be sufficient to find and run the test, as well as any other tests in other test files.

Default (since I use Poetry):

poetry run python -m pytest

However, you can select which tests to run with the -k flag. This lets you pass a test name (or substring expression) to the runner.

For a specific test:

poetry run python -m pytest -k test_generate_returns_default

You can also select tests by passing the node ID. This is useful after running a test suite, since node IDs appear in the test runner's output. A common situation where this helps is when tests use inheritance and you want to run a specific instance of a parent test.

By node ID:

poetry run python -m pytest -v test_generator.py::TestGenerator::test_generate

By default, pytest captures standard output, so print statements are not shown when tests run. Passing the -s flag disables capturing so printed output appears. This is useful as an alternative to setting a breakpoint. (See more here)

With standard output:

poetry run python -m pytest -s

Breakpoints

Useful while programming in Python generally, recent versions of Python (3.7+) let you add breakpoint() on any line; when the code runs, it drops you into the pdb debugger so you can inspect your state.

This is also very useful when writing tests, as you can place a debugger in your tests as well as your tested code.
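As a minimal sketch, reusing the Generator example from earlier, you could drop breakpoint() directly into one of the test methods:

def test_generate_returns_default(self):
    generator = Generator()
    breakpoint()  # pauses here and drops into the pdb prompt when the test runs
    assert generator.generate() == 1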

Assertions

Assertions are fairly standard amongst testing frameworks and tools. I won’t spend much time here since there isn’t much nuance or differentiation in Pytest.

After creating instances of classes and calling methods on those instances, we can verify that the behavior is as expected by making assertions. Running the tests ensures that these assertions hold, as shown by a full suite of passing tests.

Assertion Example

Here we have packages returned from the generate_packages() call. In this particular example we expect there to be 0 generated packages, so we can make an assertion against that fact.

packages = Builder(
    package_request=GetPackagesRequestFactory(sized_items=[item])
).generate_packages()

assert len(packages) == 0

Assertions with Exceptions

One thing to note, however, is how pytest deals with expecting an exception. (You can read more about this here)

import pytest


def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0

Factories

I use factory boy, which is a library based on thoughtbot’s factory_bot.

Factories are classes you instantiate to build complex objects with data customized for the current test; they let you establish defaults and declare only the test-specific fields.

To create a factory, define a class that inherits from factory.Factory and declares the set of attributes used to instantiate the Python object the factory models. The class of the object being modeled is set in the model field of the inner Meta class, as follows:

import factory

from ...models import DietaryObject


class DietaryObjectFactory(factory.Factory):
    class Meta:
        model = DietaryObject

    vegetarian = False
    gluten_free = False
    vegan = False
    kosher = False
    halal = False

I use factories extensively throughout tests, with a couple of additional helper methods that the factory library provides.

For example, if a test uses multiple factory instances, the factory can be set up to create unique attribute values.

Using Sequences

m_category_uuid = factory.Sequence(lambda n: f'category-uuid-{n}')

In this way, each subsequent factory instance created has a different value for m_category_uuid, ensuring uniqueness for this value.
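To illustrate, here is a small self-contained sketch; the Category model and CategoryFactory are hypothetical stand-ins for whatever factory declares the sequence above:

import factory


class Category:
    def __init__(self, m_category_uuid):
        self.m_category_uuid = m_category_uuid


class CategoryFactory(factory.Factory):
    class Meta:
        model = Category

    # each instance built by this factory gets the next value in the sequence
    m_category_uuid = factory.Sequence(lambda n: f'category-uuid-{n}')


first = CategoryFactory()
second = CategoryFactory()

assert first.m_category_uuid == 'category-uuid-0'
assert second.m_category_uuid == 'category-uuid-1'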

There are also scenarios when defining factories that require attributes to be created from their own factories. This is usually encountered when the object being modeled contains another object as an attribute.

Using SubFactories

filters = factory.SubFactory(PackageFiltersFactory)
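In context, that declaration lives inside a factory definition. The following is a simplified, self-contained sketch (the plain PackageFilters and GetPackagesRequest classes here are hypothetical stand-ins for the real models):

import factory


class PackageFilters:
    def __init__(self, vegetarian=False):
        self.vegetarian = vegetarian


class GetPackagesRequest:
    def __init__(self, filters):
        self.filters = filters


class PackageFiltersFactory(factory.Factory):
    class Meta:
        model = PackageFilters

    vegetarian = False


class GetPackagesRequestFactory(factory.Factory):
    class Meta:
        model = GetPackagesRequest

    # filters is built by its own factory whenever a request is built
    filters = factory.SubFactory(PackageFiltersFactory)


request = GetPackagesRequestFactory()
assert request.filters.vegetarian is False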

Using a Factory in a Test

Once the corresponding factory has been imported, it can be used to set up the data for a given test. Most of the time this setup code can be shared, via a fixture, amongst the tests for a given method, but that's something we'll get into in a following section.

As mentioned, we can override the default values defined in the factory class with our own values in accordance to what we are testing.

In the following example, a breakfast item is indicated by the presence of the “breakfast” category name. Since the default doesn’t align with this, we’ll have to override it:

breakfast_item = SizedItemFactory(
    m_item_uuid='breakfast-uuid',
    m_category_uuid='category1',
    category_name='breakfast'
)

Using Build Batch

If you need to create multiple instances of a factory object (with similar attributes), then .build_batch can be useful.

The return value is a list whose length equals the first argument, with each element being an instance built by the factory the method is called on.

items = SizedItemFactory.build_batch(2, ordered_count=1)

'''
items => [SizedItem(ordered_count=1), SizedItem(ordered_count=1)]
'''

Fixtures

When writing several tests related to a given method on a class, you often find that the test data setup is the same for many of them. One way to share code amongst multiple tests is to use fixtures. Fixture functions act as a form of dependency injection: the fixture plays the role of the injector, and test functions are the consumers of fixture objects, freed from having to care about setup or cleanup details.

Fixtures are created when first requested by a test, and are destroyed based on their scope:

  • function: the default scope, the fixture is destroyed at the end of the test.
  • class: the fixture is destroyed during teardown of the last test in the class.
  • module: the fixture is destroyed during teardown of the last test in the module.
  • package: the fixture is destroyed during teardown of the last test in the package.
  • session: the fixture is destroyed at the end of the test session.
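As a minimal runnable sketch (a hypothetical example, not from the original code), a fixture that is expensive to build can be given a wider scope so it is created only once:

import sqlite3

import pytest


@pytest.fixture(scope="module")
def db_connection():
    # created once for this module; torn down after the last test in it runs
    connection = sqlite3.connect(":memory:")
    yield connection
    connection.close()


def test_select_one(db_connection):
    assert db_connection.execute("SELECT 1").fetchone() == (1,)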

Using a Fixture

class TestPartitioner:
    @pytest.fixture
    def create_items(self):
        item1 = SizedItemFactory(ordered_count=4, m_item_uuid='item-uuid-1')
        item2 = SizedItemFactory(ordered_count=2, m_item_uuid='item-uuid-2')
        item3 = SizedItemFactory(ordered_count=1, m_item_uuid='item-uuid-3')
        return [item1, item2, item3]

    class TestPartitions:
        def test_partitions_default_diversity(self, create_items):
            '''Returns 2 partitions'''
            partitions = Partitioner(sized_items=create_items, headcount=11).partition()
            assert len(partitions) == 2

        # ...
    # ...

In this example, we define a fixture by adding a decorator to a function (@pytest.fixture).

You can see all fixtures available, their scopes, and their docstrings by running: poetry run python -m pytest --fixtures

Here the test data setup of sized_items, composed of 3 items with different ordered counts, can be shared amongst many tests. To use a fixture, pass its function name as an argument to the test itself. If the fixture is defined in the current scope or a parent scope, it will be pulled in and used within the test.

Passing Arguments to Fixtures

There are situations in which a fixture is useful to share amongst multiple tests, but the tests need different setup values.

The “factory as fixture” pattern can help in situations where the result of a fixture is needed multiple times in a single test. Instead of returning data directly, the fixture instead returns a function which generates the data. This function can then be called multiple times in the test with specific parameters passed into the function.

@pytest.fixture
def mock_item_sampler(mocker):
    '''Returns a function that takes an item and mocks out the sampler with that item'''
    def _mock_item_sampler(item):
        mock_class = mocker.Mock()
        mock_class.return_value.sample.return_value = item
        mocker.patch(
            "catering_packages.builders.probabalistic_sampling_builder.ItemSampler",
            mock_class
        )
        return mock_class

    return _mock_item_sampler


class TestProbabalisticSamplingBuilder:
    class TestItemSampler:
        def test_item_sampler_called(self, mock_item_sampler):
            '''It properly calls the item sampler'''
            item = SizedItemFactory(quantity=1, serves=1)
            mock_class = mock_item_sampler(item)

            builder = ProbabalisticSamplingBuilder(
                package_request=GetPackagesRequestFactory(
                    headcounts=PackageHeadcountsFactory(total=2),
                    sized_items=[item]
                )
            )
            builder.generate_packages()

            mock_class.assert_any_call(
                sized_items=[item],
                remaining_headcount=2,
                package_items=builder.packages[0].main_items
            )
            mock_class.assert_any_call(
                sized_items=[item],
                remaining_headcount=1,
                package_items=builder.packages[0].main_items
            )

        # ...
    # ...

The above example includes mocking, a concept we will get to shortly. What is more important to call out here is that the fixture used in test_item_sampler_called() is passed an argument, which the fixture uses during setup. In this way the fixture can be shared amongst multiple tests, setting up a certain object or class the same way but with different return values or attributes.

In this case, the item that is used in the fixture can be differentiated amongst tests.

Parametrization

class TestPackage:
    @pytest.mark.parametrize("dietary_field", DietaryObject.FIELDS)
    class TestDietaryItems:
        def test_dietary_items(self, dietary_field):
            '''It returns the associated items'''
            package_items = PackageItems()
            package = Package(**{f'{dietary_field}_items': package_items})

            assert package.dietary_items(dietary_field) == package_items

Fixture functions or test functions can be parametrized, meaning that they will be called multiple times with each run passing in a new value for the variable(s) passed as arguments.

When fixtures are defined this way, the tests that use them are executed multiple times, once per parameter.

In the above example, the dietary_field argument in our test is mapped to the list DietaryObject.FIELDS, meaning that each run pulls a new value from the collection to be represented by dietary_field in the test.

In the test output when running parametrized tests, a separate test instance appears for each value in the collection, and each can pass or fail independently.
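Fixtures themselves can also be parametrized via the params argument; each test that uses the fixture then runs once per parameter. A minimal sketch with hypothetical values:

import pytest


@pytest.fixture(params=['breakfast', 'lunch', 'dinner'])
def meal(request):
    # request.param holds the current parameter for this run
    return request.param


def test_meal_is_known(meal):
    assert meal in ('breakfast', 'lunch', 'dinner')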

Mocking

Those familiar with RSpec will recognize the notion of spies and test doubles. In Python, mocking lets you do similar things: you can replace pieces of your code with mock objects and then make assertions about how those mock objects have been used.

Note: This entire section covers concepts particular to pytest-mock, a library that changes the usage of the standard mock API. One benefit of this library is that it avoids a scaling problem with the standard API: when you have multiple patches to apply, you end up stacking decorators or nesting with statements. Another benefit is automatic removal of mocks at the end of the test; with standard mocking, a mock is not removed automatically when the test completes and can pollute other tests.

The core of mocking begins with the use of .patch.

Patching a class replaces that class with a Mock instance (which you can also customize). If the class is instantiated in your code under test, the instance it receives will be the return_value of the mock you’ve defined. If you’re mocking a specific method on a class, that method is replaced with the Mock instance you define.

There are two core ways of associating mocks with classes or methods on classes: patch and patch.object

mocker.patch requires you to specify the path to the object being patched as its first argument. This should be a string of the form package.module.ClassName. The target is imported and the specified object replaced with the new object, so it must be importable from the environment you are calling patch() from.

The second argument is the thing that you are replacing the class with.

mocker.patch.object is useful because you can mock specific method calls on a given class. You can call patch.object() with either three arguments or two. The three-argument form takes the object to be patched, the attribute name, and the object to replace the attribute with. In the two-argument form (object and attribute name only), a MagicMock is generated and used as the replacement.
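As a hypothetical sketch of the two-argument form (the test and its names are illustrative, not from the original code), the generated MagicMock is returned so you can configure it and assert against it:

import numpy as np


def test_choice_is_stubbed(mocker):
    # two-argument form: a MagicMock replacement is generated and returned
    mock_choice = mocker.patch.object(np.random, 'choice')
    mock_choice.return_value = 0

    assert np.random.choice([1, 2, 3]) == 0
    mock_choice.assert_called_once_with([1, 2, 3])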

Using patch and patch.object

@pytest.fixture
def mocked_random(mocker):
    mocked_random = mocker.Mock()
    mocker.patch.object(np, 'random', mocked_random)
    mocker.patch("packages.item_sampler.np", np)
    return mocked_random

Note: In the above example, as in all cases when using mocks via pytest-mock, you need mocker as an argument to the function. This is because mocker itself is a pytest fixture, and fixtures are brought into functions via arguments. The code containing mocks does not need to be a fixture, but this type of setup usually ends up being shared across tests, so fixtures are common.

In the above example, we first define a Mock object that we can later make assertions against (whether it was called, how many times it was called, what arguments it was passed, etc.).

Then we pass this Mock object to be used in place of the random attribute on the np module. Now the Mock object is properly associated with np, but it isn’t yet tied to the np that is used in the actual code we are testing.

To do so, we patch the np used in the file under test and replace it there with the np we have just patched in our test. This way, when the random method is called by the code under test, it is the Mock object we defined that gets used.

Using return_value

Once we define the mocks, we can also give them return values. This is useful when we want to return a specific shape or value from our Mock. There are two syntactical approaches, as follows:

One Approach (inline)

@pytest.fixture
def mock_np_choice_class(self, mocker):
    np_choice_class = mocker.Mock()
    mocker.patch.object(np.random, 'choice', np_choice_class)
    np_choice_class.return_value = 0
    mocker.patch("partitioner", np)
    return np_choice_class

Another Approach (when defining the mock):

def _mock_entrees_category(items, is_entrees):
    category = CategoryFactory(sized_items=items)
    mocker.patch.object(category, "is_entrees", mocker.Mock(return_value=is_entrees))
    return category

Using side_effect to return different values from each call

def test_can_select_from_same_category(self, mocker):
    item_1 = SizedItemFactory(m_item_uuid='item1', m_category_uuid='category1')
    item_2 = SizedItemFactory(m_item_uuid='item2', m_category_uuid='category1')

    mock_item_sampler = mocker.Mock()
    mock_item_sampler.sample.side_effect = [item_1, item_2]
    mocker.patch(
        'builders.probabalistic_sampling_builder.ItemSampler',
        return_value=mock_item_sampler
    )

    packages = ProbabalisticSamplingBuilder(
        package_request=GetPackagesRequestFactory(
            package_count=2,
            headcounts=PackageHeadcountsFactory(total=1),
            sized_items=[item_1, item_2]
        )
    ).generate_packages()

    assert len(packages[0].main_items.items()) == 1
    assert len(packages[1].main_items.items()) == 1

    first_package_sized_items = [
        package_item.sized_item for package_item in packages[0].main_items.items()
    ]
    second_package_sized_items = [
        package_item.sized_item for package_item in packages[1].main_items.items()
    ]

    assert set([item_1, item_2]) == set(first_package_sized_items + second_package_sized_items)

Let’s say we’re in a situation in which a method is called multiple times, but we need it to return a different value for each call. In this case, we set side_effect on the mocked method and pass it a list (mock_item_sampler.sample.side_effect above). The return value of the first call is mapped to the first entry, the second call to the second entry, and so on.

Using side_effect to throw an error

solve_error_mock.side_effect = SolverError()

The typical approach for raising errors uses side_effect, since an exception is not a value returned from a method call.
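Combined with pytest.raises from earlier, a minimal hypothetical sketch (using a built-in exception in place of the project-specific SolverError) might look like:

import pytest


def test_solve_raises(mocker):
    solve_mock = mocker.Mock()
    solve_mock.side_effect = ValueError("infeasible")  # each call raises instead of returning

    with pytest.raises(ValueError):
        solve_mock()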

Assertions on Mocks

Once we’ve defined the mocks, we can make assertions against them! In this way we can verify that certain methods are called, verify how many times they’re called, and ensure that they are passed the right arguments.

A complete list of the possible assertions can be found here.

It is worth noting that certain assertion methods (assert_called_once, assert_called_with) can be dangerous on Mock instances. Because MagicMock deliberately allows any method to be called on it, a typo in one of these assertion method names can end up being treated as just another mocked call rather than an assertion, and the test will always pass.

It’s often safer to skip these methods and assert against mock.call_args, mock.call_args_list, or mock.mock_calls instead, e.g.

assert len(mock.mock_calls) == 1

mock.call_args and mock.call_args_list can be slightly awkward to destructure: call_args holds the arguments of the most recent call, while call_args_list holds the arguments of every call made to the mock.
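For instance, call_args can be unpacked into positional and keyword arguments; a small hypothetical sketch:

def test_call_args_unpacking(mocker):
    notify = mocker.Mock()
    notify('alice', urgent=True)

    # call_args behaves like a (args, kwargs) tuple for the most recent call
    args, kwargs = notify.call_args
    assert args == ('alice',)
    assert kwargs == {'urgent': True}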

Using mock.call_args

def test_solver_error(self, mocker, main_subproblem, mock_problem_status):
    mock_problem_status()

    problem_class = mocker.Mock()
    solve_error_mock = mocker.Mock()
    solve_error_mock.side_effect = SolverError()

    mocker.patch.object(cp, "Problem", problem_class)
    problem_class.return_value.solve = solve_error_mock

    solved_subproblems = Problem(subproblems=[main_subproblem], budget=150).solve()

    assert logger.info.call_args[0][0] == 'timeout reached'

If you have the mock stored as a variable, you can make assertions against it directly; otherwise you can make assertions against the patched object itself, as follows:

def test_filters(self, mocker):
    '''Tests that the from_grpc_message function is called on the filters'''
    filters = pb2.PackageFilters()
    mocked_filters = mocker.Mock(spec=PackageFilters)
    mocker.patch.object(PackageFilters, 'from_grpc_message', mocker.Mock(return_value=mocked_filters))

    message = pb2.GetPackagesRequest(filters=filters)
    request = GetPackagesRequest.from_grpc_message(message)

    assert request.filters == mocked_filters
    PackageFilters.from_grpc_message.assert_called_once_with(filters)

Conclusion

Testing with pytest is fairly simple, but there are plenty of edge-case scenarios that can be slightly tricky to set up. Hopefully this guide helps a bit! 🐍

