
useful resources:

  1. https://stackabuse.com/test-driven-development-with-pytest/
  2. https://docs.pytest.org/en/latest/goodpractices.html#conventions-for-python-test-discovery
  3. https://blog.ionelmc.ro/2014/05/25/python-packaging/#the-structure
  4. https://github.com/vanzaj/tdd-pytest/blob/master/docs/tdd-pytest/content/tdd-basics.md
  5. https://opensource.com/article/18/6/pytest-plugins

setup

  1. install pytest
  2. install pytest-sugar which will give us nicer output
pip -q install pytest pytest-sugar

# move to tdd directory
from pathlib import Path
if Path.cwd().name != 'tdd':
    %mkdir tdd
    %cd tdd

%pwd

'/content/tdd/tdd'
# cleanup all files
%rm *.py

How pytest discovers tests

pytest uses the following conventions to automatically discover tests:

  1. files with tests should be called test_*.py or *_test.py
  2. test function name should start with test_

our first test

to see if our code works, we can use Python's assert keyword. pytest adds hooks to assertions to make their failure output more useful

%%file test_math.py

import math
def test_add():
    assert 1+1 == 2

def test_mul():
    assert 6*7 == 42

def test_sin():
    assert math.sin(0) == 0

Writing test_math.py

now let's run pytest

!python -m pytest test_math.py 

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 
 test_math.py βœ“βœ“βœ“                                                100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (0.02s):
       3 passed

Great! we just wrote 3 tests that show that basic math still works

Hurray!

your turn

write a test for the following function.

if there is a bug in the function, fix it

%%file make_triangle.py

# version 1

def make_triangle(n):
    """
    draws a triangle using '@' letters
    for instance:
        >>> print('\n'.join(make_triangle(3)))
        @
        @@
        @@@
    """

    for i in range(n):
        yield '@' * i


Writing make_triangle.py

solution

%%file test_make_triangle.py

from make_triangle import make_triangle

def test_make_triangle():
    expected = "@"
    actual = '\n'.join(make_triangle(1))
    assert actual == expected

Overwriting test_make_triangle.py
!python -m pytest test_make_triangle.py

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 

―――――――――――――――――――――――――――――― test_make_triangle ――――――――――――――――――――――――――――――

    def test_make_triangle():
        expected = "@"
        actual = '\n'.join(make_triangle(1))
>       assert actual == expected
E       AssertionError: assert '' == '@'
E         + @

test_make_triangle.py:7: AssertionError

 test_make_triangle.py β¨―                                         100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (0.04s):
       1 failed
         - test_make_triangle.py:4 test_make_triangle

so the expected value is '@' while the actual value is '' …

this is a bug! let's fix the code and re-run

%%file make_triangle.py

# version 2 
def make_triangle(n):
    """
    draws a triangle using '@' letters
    for instance:
        >>> print('\n'.join(make_triangle(3)))
        @
        @@
        @@@
    """

    for i in range(1, n+1):
        yield '@' * i

Overwriting make_triangle.py
!python -m pytest test_make_triangle.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1
rootdir: /content, inifile:
collected 1 item                                                               

test_make_triangle.py .                                                  [100%]

=========================== 1 passed in 0.01 seconds ===========================

Pytest context-sensitive comparisons

Reference

pytest has rich support for providing context-sensitive information when it encounters comparisons.

Special comparisons are done for a number of cases:

  • comparing long strings: a context diff is shown
  • comparing long sequences: first failing indices
  • comparing dicts: different entries

Here’s how this looks for a set:

%%file test_compare_fruits.py
def test_set_comparison():
    set1 = set(['Apples', 'Bananas', 'Watermelon', 'Pear',  'Guave', 'Carambola', 'Plum'])
    set2 = set(['Plum', 'Apples', 'Grapes', 'Watermelon','Pear', 'Guave', 'Carambola',  'Melon' ])
    assert set1 == set2

Writing test_compare_fruits.py
!python -m pytest test_compare_fruits.py

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 

――――――――――――――――――――――――――――― test_set_comparison ――――――――――――――――――――――――――――――

    def test_set_comparison():
        set1 = set(['Apples', 'Bananas', 'Watermelon', 'Pear',  'Guave', 'Carambola', 'Plum'])
        set2 = set(['Plum', 'Apples', 'Grapes', 'Watermelon','Pear', 'Guave', 'Carambola',  'Melon' ])
>       assert set1 == set2
E       AssertionError: assert {'Apples', 'B..., 'Plum', ...} == {'Apples', 'C..., 'Pear', ...}
E         Extra items in the left set:
E         'Bananas'
E         Extra items in the right set:
E         'Melon'
E         'Grapes'
E         Use -v to get the full diff

test_compare_fruits.py:4: AssertionError

 test_compare_fruits.py β¨―                                        100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (0.03s):
       1 failed
         - test_compare_fruits.py:1 test_set_comparison

your turn

test the following function count_words() and fix any bugs.

the expected output from the function is given in expected_output

expected_output = {
 'and': 2,
 'chief': 2,
 'didnt': 1,
 'efficiency': 1,
 'expected': 1,
 'expects': 1,
 'fear': 2,
 'i': 1,
 'inquisition': 2,
 'is': 1,
 'no': 1,
 'one': 1,
 'our': 1,
 'ruthless': 1,
 'spanish': 2,
 'surprise': 3,
 'the': 2,
 'two': 1,
 'weapon': 1,
 'weapons': 1,
 'well': 1}

%%file spanish_inquisition.py
# version 1: buggy
import collections

quote = """
Well, I didn't expected the Spanish Inquisition ...
No one expects the Spanish Inquisition!
Our chief weapon is surprise, fear and surprise;
two chief weapons, fear, surprise, and ruthless efficiency! 
"""

def remove_punctuation(quote):
    quote.translate(str.maketrans('', '', "',.!?;")).lower()
    return quote

def count_words(quote):
    quote = remove_punctuation(quote)
    return dict(collections.Counter(quote.split(' ')))

Overwriting spanish_inquisition.py

solution

%%file test_spanish_inquisition.py

from spanish_inquisition import *

expected_output = {
 'and': 2,
 'chief': 2,
 'didnt': 1,
 'efficiency': 1,
 'expected': 1,
 'expects': 1,
 'fear': 2,
 'i': 1,
 'inquisition': 2,
 'is': 1,
 'no': 1,
 'one': 1,
 'our': 1,
 'ruthless': 1,
 'spanish': 2,
 'surprise': 3,
 'the': 2,
 'two': 1,
 'weapon': 1,
 'weapons': 1,
 'well': 1}

def test_spanish_inquisition():
    actual = count_words(quote)
    assert actual == expected_output

Overwriting test_spanish_inquisition.py
!python -m pytest -vv test_spanish_inquisition.py

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
cachedir: .pytest_cache
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 

――――――――――――――――――――――――――― test_spanish_inquisition ―――――――――――――――――――――――――――

    def test_spanish_inquisition():
        actual = count_words(quote)
>       assert actual == expected_output
E       assert {'\n': 1,\n '\nWell,': 1,\n '...\nNo': 1,\n 'I': 1,\n 'Inquisition': 1,\n 'Inquisition!\nOur': 1,\n 'Spanish': 2,\n 'and': 2,\n 'chief': 2,\n "didn't": 1,\n 'efficiency!': 1,\n 'expected': 1,\n 'expects': 1,\n 'fear': 1,\n 'fear,': 1,\n 'is': 1,\n 'one': 1,\n 'ruthless': 1,\n 'surprise,': 2,\n 'surprise;\ntwo': 1,\n 'the': 2,\n 'weapon': 1,\n 'weapons,': 1} == {'and': 2,\n 'chief': 2,\n 'didnt': 1,\n 'efficiency': 1,\n 'expected': 1,\n 'expects': 1,\n 'fear': 2,\n 'i': 1,\n 'inquisition': 2,\n 'is': 1,\n 'no': 1,\n 'one': 1,\n 'our': 1,\n 'ruthless': 1,\n 'spanish': 2,\n 'surprise': 3,\n 'the': 2,\n 'two': 1,\n 'weapon': 1,\n 'weapons': 1,\n 'well': 1}
E         Common items:
E         {'and': 2,
E          'chief': 2,
E          'expected': 1,
E          'expects': 1,
E          'is': 1,
E          'one': 1,
E          'ruthless': 1,
E          'the': 2,
E          'weapon': 1}
E         Differing items:
E         {'fear': 1} != {'fear': 2}
E         Left contains 13 more items:
E         {'\n': 1,
E          '\nWell,': 1,
E          '...\nNo': 1,
E          'I': 1,
E          'Inquisition': 1,
E          'Inquisition!\nOur': 1,
E          'Spanish': 2,
E          "didn't": 1,
E          'efficiency!': 1,
E          'fear,': 1,
E          'surprise,': 2,
E          'surprise;\ntwo': 1,
E          'weapons,': 1}
E         Right contains 11 more items:
E         {'didnt': 1,
E          'efficiency': 1,
E          'i': 1,
E          'inquisition': 2,
E          'no': 1,
E          'our': 1,
E          'spanish': 2,
E          'surprise': 3,
E          'two': 1,
E          'weapons': 1,
E          'well': 1}
E         Full diff:
E           {
E         -  '\n': 1,
E         -  '\nWell,': 1,
E         -  '...\nNo': 1,
E         -  'I': 1,
E         -  'Inquisition': 1,
E         -  'Inquisition!\nOur': 1,
E         -  'Spanish': 2,
E            'and': 2,
E            'chief': 2,
E         -  "didn't": 1,
E         ?  ^     --
E         +  'didnt': 1,
E         ?  ^    +
E         -  'efficiency!': 1,
E         ?             -
E         +  'efficiency': 1,
E            'expected': 1,
E            'expects': 1,
E         -  'fear': 1,
E         ?          ^
E         +  'fear': 2,
E         ?          ^
E         -  'fear,': 1,
E         +  'i': 1,
E         +  'inquisition': 2,
E            'is': 1,
E         +  'no': 1,
E            'one': 1,
E         +  'our': 1,
E            'ruthless': 1,
E         +  'spanish': 2,
E         -  'surprise,': 2,
E         ?           -   ^
E         +  'surprise': 3,
E         ?              ^
E         -  'surprise;\ntwo': 1,
E            'the': 2,
E         +  'two': 1,
E            'weapon': 1,
E         -  'weapons,': 1,
E         ?          -
E         +  'weapons': 1,
E         +  'well': 1,
E           }

test_spanish_inquisition.py:29: AssertionError

 test_spanish_inquisition.py::test_spanish_inquisition β¨―         100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (0.04s):
       1 failed
         - test_spanish_inquisition.py:27 test_spanish_inquisition
%%file spanish_inquisition.py
# version 2: fixed
import collections

quote = """
Well, I didn't expected the Spanish Inquisition ...
No one expects the Spanish Inquisition!
Our chief weapon is surprise, fear and surprise;
two chief weapons, fear, surprise, and ruthless efficiency! 
"""

def remove_punctuation(quote):
    # quote.translate(str.maketrans('', '', "',.!?;")).lower() # BUG: missing return
    return quote.translate(str.maketrans('', '', "',.!?;")).lower()

def count_words(quote):
    quote = remove_punctuation(quote)
    # return dict(collections.Counter(quote.split(' '))) # BUG
    return dict(collections.Counter(quote.split()))

Overwriting spanish_inquisition.py
!python -m pytest -vv test_spanish_inquisition.py

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
cachedir: .pytest_cache
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 
 test_spanish_inquisition.py::test_spanish_inquisition βœ“         100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (0.02s):
       1 passed

Using fixtures to simplify tests

Motivating example

Let's look at an example of a Person class, where each person has a name and remembers their friends.

%%file person.py

#version 1
class Person:
    def __init__(self, name, favorite_color, year_born):
        self.name = name
        self.favorite_color = favorite_color
        self.year_born = year_born
        self.friends = set()

    def add_friend(self, other_person):
        if not isinstance(other_person, Person):
            raise TypeError(f'{other_person!r} is not a Person')
        self.friends.add(other_person)
        other_person.friends.add(self)

    def __repr__(self):
        return f'Person(name={self.name!r}, '  \
               f'favorite_color={self.favorite_color!r}, ' \
               f'year_born={self.year_born!r}, ' \
               f'friends={[f.name for f in self.friends]})'


Overwriting person.py

Let's write a test for the add_friend() function.

notice how the setup takes up so much of the test function, while also requiring us to invent a lot of repetitious data.

is there a way to reduce this boilerplate code?

%%file test_person.py

from person import Person

def test_name():
    # setup
    terry = Person(
        'Terry Gilliam',
        'red',
        1940
        )
    
    # test
    assert terry.name == 'Terry Gilliam' 


def test_add_friend():
    # setup for the test 
    terry = Person(
        'Terry Gilliam',
        'red',
        1940
        )
    eric = Person(
        'Eric Idle',
        'blue',
        1943
        )
    
    # actual test
    terry.add_friend(eric)
    assert eric in terry.friends
    assert terry in eric.friends

Overwriting test_person.py
!python -m pytest -q test_person.py

..                                                                       [100%]
2 passed in 0.01 seconds

Fixtures to the rescue

what if we had a magic factory that could conjure up a name, favorite color and birth year?

then we could write our test_name() more simply like this:

def test_name(person_name, favorite_color, birth_year):
    person = Person(person_name, favorite_color, birth_year)
    
    # test
    assert person.name == person_name 

furthermore, if we had a magic factory that can create terry and eric we could write our test_add_friend() function like this:

def test_add_friend(eric, terry):
    eric.add_friend(terry)
    assert eric in terry.friends
    assert terry in eric.friends

fixtures in pytest allow us to create such magic factories using the @pytest.fixture notation.

here’s an example:

%%file test_person_fixtures1.py

import pytest
from person import Person

@pytest.fixture
def person_name():
    return 'Terry Gilliam'

@pytest.fixture
def birth_year():
    return 1940

@pytest.fixture
def favorite_color():
    return 'red'

def test_person_name(person_name, favorite_color, birth_year):
    person = Person(person_name, favorite_color, birth_year)
 
    # test
    assert person.name == person_name 

Overwriting test_person_fixtures1.py
!python -m pytest test_person_fixtures1.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1
rootdir: /content, inifile:
collected 1 item                                                               

test_person_fixtures1.py .                                               [100%]

=========================== 1 passed in 0.02 seconds ===========================

what’s happening here?

pytest sees that the test function test_person_name(person_name, favorite_color, birth_year) requires three parameters, and searches for fixtures annotated with @pytest.fixture with the same name.

when it finds them, it calls these fixtures on our behalf, and passes the return value as the parameter. in effect, it calls

test_person_name(person_name=person_name(), favorite_color=favorite_color(), birth_year=birth_year())

note how much code this saves
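fixtures can themselves depend on other fixtures, simply by naming them as parameters, exactly the way tests do. a small sketch (the fixture names here are made up for illustration):

```python
import pytest

@pytest.fixture
def person_name():
    return 'Terry Gilliam'

@pytest.fixture
def greeting(person_name):
    # pytest resolves person_name first and passes its return value here
    return f'Hello, {person_name}!'

def test_greeting(greeting):
    assert greeting == 'Hello, Terry Gilliam!'
```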

your turn

  1. rewrite the test_add_friend function to accept two parameters def test_add_friend(eric, terry)
  2. write fixtures for eric and terry
  3. run pytest

solution

%%file test_person_fixtures2.py

import pytest
from person import Person

@pytest.fixture
def eric():
    return Person('Eric Idle', 'red', 1943)

@pytest.fixture
def terry():
    return Person('Terry Gilliam', 'blue', 1940)

def test_add_friend(eric, terry):
    eric.add_friend(terry)
    assert eric in terry.friends
    assert terry in eric.friends
    

Writing test_person_fixtures2.py
!python -m pytest -q test_person_fixtures2.py

.                                                                        [100%]
1 passed in 0.02 seconds

parameterizing fixtures

Fixture functions can be parametrized, in which case they will be called multiple times, each time executing the set of dependent tests, i.e. the tests that depend on this fixture.

Test functions usually do not need to be aware of their re-running. Fixture parametrization helps to write exhaustive functional tests for components which themselves can be configured in multiple ways.

%%file test_primes.py

import pytest
import math

def is_prime(x):
    return all(x % factor != 0 for factor in range(2, int(x/2)))

@pytest.fixture(params=[2, 3, 5, 7, 11, 13, 17, 19, 101])
def prime_number(request):
    return request.param

def test_prime(prime_number):
    assert is_prime(prime_number) == True

Overwriting test_primes.py
!python -m pytest --verbose test_primes.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /content, inifile:
collected 9 items                                                              

test_primes.py::test_prime[2] PASSED                                     [ 11%]
test_primes.py::test_prime[3] PASSED                                     [ 22%]
test_primes.py::test_prime[5] PASSED                                     [ 33%]
test_primes.py::test_prime[7] PASSED                                     [ 44%]
test_primes.py::test_prime[11] PASSED                                    [ 55%]
test_primes.py::test_prime[13] PASSED                                    [ 66%]
test_primes.py::test_prime[17] PASSED                                    [ 77%]
test_primes.py::test_prime[19] PASSED                                    [ 88%]
test_primes.py::test_prime[101] PASSED                                   [100%]

=========================== 9 passed in 0.03 seconds ===========================
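when you don't need a reusable fixture, the same one-test-per-value behavior is also available directly on the test via the @pytest.mark.parametrize decorator. a sketch (the squaring example is made up, not part of the prime exercise):

```python
import pytest

@pytest.mark.parametrize('n, expected', [
    (2, 4),
    (3, 9),
    (10, 100),
])
def test_square(n, expected):
    # pytest generates one test per (n, expected) pair
    assert n * n == expected
```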

your turn

test is_prime() for non-prime numbers

bonus: can you find and fix the bug in is_prime() using a test?

solution

%%file test_non_primes.py

import pytest

FIX_BUG = True
if FIX_BUG:
    def is_prime_fixed(x):
        # notice the +1 - it is important when x=4
        return all(x % factor != 0 for factor in range(2, int(x/2) + 1))
    is_prime = is_prime_fixed
else:
    from test_primes import is_prime

@pytest.fixture(params=[4, 6, 8, 9, 10, 12, 14, 15, 16, 28, 60, 100])
def non_prime_number(request):
    return request.param

def test_non_primes(non_prime_number):
    assert is_prime(non_prime_number) == False

Overwriting test_non_primes.py
!python -m pytest --verbose test_non_primes.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /content, inifile:
collected 12 items                                                             

test_non_primes.py::test_non_primes[4] PASSED                            [  8%]
test_non_primes.py::test_non_primes[6] PASSED                            [ 16%]
test_non_primes.py::test_non_primes[8] PASSED                            [ 25%]
test_non_primes.py::test_non_primes[9] PASSED                            [ 33%]
test_non_primes.py::test_non_primes[10] PASSED                           [ 41%]
test_non_primes.py::test_non_primes[12] PASSED                           [ 50%]
test_non_primes.py::test_non_primes[14] PASSED                           [ 58%]
test_non_primes.py::test_non_primes[15] PASSED                           [ 66%]
test_non_primes.py::test_non_primes[16] PASSED                           [ 75%]
test_non_primes.py::test_non_primes[28] PASSED                           [ 83%]
test_non_primes.py::test_non_primes[60] PASSED                           [ 91%]
test_non_primes.py::test_non_primes[100] PASSED                          [100%]

========================== 12 passed in 0.03 seconds ===========================
by the way, why did the buggy is_prime(4) return True? because range(2, int(4/2)) is range(2, 2), which is empty, and all() of an empty sequence is True:

all(4 % factor != 0 for factor in range(2, int(4/2)))

True
!python -m pytest --verbose test_primes.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /content, inifile:
collected 21 items                                                             

test_primes.py::test_prime[2] PASSED                                     [  4%]
test_primes.py::test_prime[3] PASSED                                     [  9%]
test_primes.py::test_prime[5] PASSED                                     [ 14%]
test_primes.py::test_prime[7] PASSED                                     [ 19%]
test_primes.py::test_prime[11] PASSED                                    [ 23%]
test_primes.py::test_prime[13] PASSED                                    [ 28%]
test_primes.py::test_prime[17] PASSED                                    [ 33%]
test_primes.py::test_prime[19] PASSED                                    [ 38%]
test_primes.py::test_prime[101] PASSED                                   [ 42%]
test_primes.py::test_non_primes[4] FAILED                                [ 47%]
test_primes.py::test_non_primes[6] PASSED                                [ 52%]
test_primes.py::test_non_primes[8] PASSED                                [ 57%]
test_primes.py::test_non_primes[9] PASSED                                [ 61%]
test_primes.py::test_non_primes[10] PASSED                               [ 66%]
test_primes.py::test_non_primes[12] PASSED                               [ 71%]
test_primes.py::test_non_primes[14] PASSED                               [ 76%]
test_primes.py::test_non_primes[15] PASSED                               [ 80%]
test_primes.py::test_non_primes[16] PASSED                               [ 85%]
test_primes.py::test_non_primes[28] PASSED                               [ 90%]
test_primes.py::test_non_primes[60] PASSED                               [ 95%]
test_primes.py::test_non_primes[100] PASSED                              [100%]

=================================== FAILURES ===================================
______________________________ test_non_primes[4] ______________________________

non_prime_number = 4

    def test_non_primes(non_prime_number):
>       assert is_prime(non_prime_number) == False
E       assert True == False
E        +  where True = is_prime(4)

test_primes.py:20: AssertionError
===================== 1 failed, 20 passed in 0.06 seconds ======================

printing and logging within tests

printing

Reference

You can use prints within tests to provide additional debug info.

pytest redirects and captures the output of each test. it then:

  • suppresses the output of all successful tests (for brevity)
  • shows the output of all failed tests (for debugging)
  • captures both stdout and stderr

%%file test_prints.py
import sys

def test_print_success():
    print(
        """
        @@@@@@@@@@@@@@@
        this statement will NOT be printed
        @@@@@@@@@@@@@@@
        """
    )

    assert 6*7 == 42

def test_print_fail():

    print(
        """
        @@@@@@@@@@@@@@@
        this statement WILL be printed
        @@@@@@@@@@@@@@@
        """
    )
    assert True == False


def test_stderr_capture_success():
    print(
        """
        @@@@@@@@@@@@@@@
        this STDERR statement will NOT be printed
        @@@@@@@@@@@@@@@
        """, 
        file=sys.stderr
    )
     
    assert True


def test_stderr_capture_fail():
    print(
        """
        @@@@@@@@@@@@@@@
        this STDERR statement WILL be printed
        @@@@@@@@@@@@@@@
        """, 
        file=sys.stderr
    )
     
    assert False


Overwriting test_prints.py
!python -m pytest -q test_prints.py

.F.F                                                                     [100%]
=================================== FAILURES ===================================
_______________________________ test_print_fail ________________________________

    def test_print_fail():
    
        print(
            """
            @@@@@@@@@@@@@@@
            this statement WILL be printed
            @@@@@@@@@@@@@@@
            """
        )
>       assert True == False
E       assert True == False

test_prints.py:23: AssertionError
----------------------------- Captured stdout call -----------------------------

        @@@@@@@@@@@@@@@
        this statement WILL be printed
        @@@@@@@@@@@@@@@
        
___________________________ test_stderr_capture_fail ___________________________

    def test_stderr_capture_fail():
        print(
            """
            @@@@@@@@@@@@@@@
            this STDERR statement WILL be printed
            @@@@@@@@@@@@@@@
            """,
            file=sys.stderr
        )
    
>       assert False
E       assert False

test_prints.py:49: AssertionError
----------------------------- Captured stderr call -----------------------------

        @@@@@@@@@@@@@@@
        this STDERR statement WILL be printed
        @@@@@@@@@@@@@@@
        
2 failed, 2 passed in 0.04 seconds
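if a test needs to inspect its own captured output, pytest provides the built-in capsys fixture (the greet function below is made up for illustration):

```python
def greet():
    print('hello')

def test_greet_output(capsys):
    greet()
    # readouterr() returns everything captured on stdout/stderr so far
    captured = capsys.readouterr()
    assert captured.out == 'hello\n'
```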

logging

Reference

pytest captures log messages of level WARNING or above automatically and displays them in their own section for each failed test in the same manner as captured stdout and stderr.

  • WARNING and above will be displayed for failed tests
  • INFO and below will not be displayed

example:

%%file test_logging.py

import logging

logger = logging.getLogger(__name__)

def test_logging_warning_success():
    logger.warning('\n\n @@@ this will NOT be printed \n\n')
    assert True

def test_logging_warning_fail():
    logger.warning('\n\n @@@ this WILL be printed @@@ \n\n')
    assert False

def test_logging_info_fail():
    logger.info('\n\n @@@ this will NOT be printed @@@ \n\n')
    assert False


Overwriting test_logging.py
!python -m pytest test_logging.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1
rootdir: /content, inifile:
collected 3 items                                                              

test_logging.py .FF                                                      [100%]

=================================== FAILURES ===================================
__________________________ test_logging_warning_fail ___________________________

    def test_logging_warning_fail():
        logger.warning('\n\n @@@ this WILL be printed @@@ \n\n')
>       assert False
E       assert False

test_logging.py:12: AssertionError
------------------------------ Captured log call -------------------------------
test_logging.py             11 WARNING  

 @@@ this WILL be printed @@@
____________________________ test_logging_info_fail ____________________________

    def test_logging_info_fail():
        logger.info('\n\n @@@ this will NOT be printed @@@ \n\n')
>       assert False
E       assert False

test_logging.py:16: AssertionError
====================== 2 failed, 1 passed in 0.04 seconds ======================
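besides showing captured log messages for failed tests, pytest also exposes them to the test itself via the built-in caplog fixture, so you can assert on what was logged:

```python
import logging

logger = logging.getLogger(__name__)

def test_warning_is_recorded(caplog):
    logger.warning('something looks off')
    # caplog.text contains the formatted log output captured during the test
    assert 'something looks off' in caplog.text
```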

your turn

We give below an implementation of the FizzBuzz puzzle:

Write a function that returns the numbers from 1 to 100. But for multiples of three returns β€œFizz” instead of the number and for the multiples of five returns β€œBuzz”. For numbers which are multiples of both three and five return β€œFizzBuzz”.

thus this SHOULD be true

>>> fizzbuzz() # should return the following (abridged) output
[1, 2, 'Fizz', 4, 'Buzz', 6, 7, 8, 'Fizz', 'Buzz', 11, 'Fizz', 13, 14, 'FizzBuzz', ... ]

BUT the implementation is buggy. can you write tests for it and fix it?

%%file fizzbuzz.py

def is_multiple(n, divisor):
    return n % divisor == 0

def fizzbuzz():
    """
    expected output: list with elements numbers 
        [1, 2, 'Fizz', 4, 'Buzz', 6, 7, 8, 'Fizz', 'Buzz', 11, 'Fizz', 13, 14, 'FizzBuzz', ... ]
    """
    result = []
    for i in range(100):
        if is_multiple(i, 3):
            return "Fizz"
        elif is_multiple(i, 5):
            return "Buzz"
        elif is_multiple(i, 3) and is_multiple(i, 5):
            return "FizzBuzz"
        else:
            return i
    
    return result

Overwriting fizzbuzz.py

solution

%%file test_fizzbuzz.py

FIX_BUG = 1
if not FIX_BUG:
    from fizzbuzz import fizzbuzz
else:
    def fizzbuzz_fixed():
        def translate(i):
            if i%3 == 0 and i%5 == 0:
                return "FizzBuzz"
            elif i%3 == 0:
                return "Fizz"
            elif i%5 == 0:
                return "Buzz"
            else:
                return i

        return [translate(i) for i in range(1, 100+1)]

    fizzbuzz = fizzbuzz_fixed


import pytest
@pytest.fixture
def fizzbuzz_result():
    result = fizzbuzz()
    print(result)
    return result

@pytest.fixture
def fizzbuzz_dict(fizzbuzz_result):
    return dict(enumerate(fizzbuzz_result, 1))

def test_fizzbuzz_len(fizzbuzz_result):
    assert len(fizzbuzz_result) == 100

def test_fizzbuzz_type(fizzbuzz_result):
    assert type(fizzbuzz_result) == list

def test_fizzbuzz_first_element(fizzbuzz_dict):
    assert fizzbuzz_dict[1] == 1

def test_fizzbuzz_3(fizzbuzz_dict):
    assert fizzbuzz_dict[3] == 'Fizz'

def test_fizzbuzz_5(fizzbuzz_dict):
    assert fizzbuzz_dict[5] == 'Buzz'

def test_fizzbuzz_15(fizzbuzz_dict):
    assert fizzbuzz_dict[15] == 'FizzBuzz'




Overwriting test_fizzbuzz.py
!python -m pytest test_fizzbuzz.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1
rootdir: /content, inifile:
collected 5 items                                                              

test_fizzbuzz.py .....                                                   [100%]

=========================== 5 passed in 0.03 seconds ===========================

float: when things are (almost) equal

Reference

consider the following code, what do you expect the result to be?

x = 0.1 + 0.2
y = 0.3
print('x == y:', x == y) # what will it print?

x == y: False

if you had anticipated True, it means you haven't tried testing code with float data yet.

print(x, '!=', y)

0.30000000000000004 != 0.3

the issue is that float arithmetic is approximate (accurate enough for most calculations) but may introduce small rounding errors.

here’s a common but ugly way to test for float equivalence

abs((0.1 + 0.2) - 0.3) < 1e-6

True

here’s a more pythonic and pytest-tic way, using pytest.approx

from pytest import approx
0.1 + 0.2 == approx(0.3)

True
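approx also compares sequences element-wise, and lets you set the tolerance explicitly via its rel and abs parameters (by default the relative tolerance is 1e-6):

```python
from pytest import approx

# element-wise comparison of sequences
assert [0.1 + 0.2, 0.2 + 0.4] == approx([0.3, 0.6])

# loosen the tolerance explicitly
assert 0.30001 == approx(0.3, rel=1e-3)

# with the default tolerance this difference is too large
assert 0.30001 != approx(0.3)
```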

your turn

test that

  • math.sin(0) == 0,
  • math.sin(math.pi / 2) == 1
  • math.sin(math.pi) == 0
  • math.sin(math.pi * 3/2) == -1
  • math.sin(math.pi * 2) == 0

solution

%%file test_sin.py

from pytest import approx
import math
def test_sin():
    assert math.sin(0) == 0
    assert math.sin(math.pi / 2) == 1
    assert math.sin(math.pi) == approx(0)
    assert math.sin(math.pi * 3/2) == approx(-1)
    assert math.sin(math.pi * 2) == approx(0)


Overwriting test_sin.py
(note: the failing run below was captured with approx(0) commented out on the math.sin(math.pi) line, to show the failure that approx prevents)

!python -m pytest test_sin.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1
rootdir: /content, inifile:
collected 1 item                                                               

test_sin.py F                                                            [100%]

=================================== FAILURES ===================================
___________________________________ test_sin ___________________________________

    def test_sin():
        assert math.sin(0) == 0
        assert math.sin(math.pi / 2) == 1
>       assert math.sin(math.pi) == 0 #approx(0)
E       assert 1.2246467991473532e-16 == 0
E        +  where 1.2246467991473532e-16 = <built-in function sin>(3.141592653589793)
E        +    where <built-in function sin> = math.sin
E        +    and   3.141592653589793 = math.pi

test_sin.py:7: AssertionError
=========================== 1 failed in 0.03 seconds ===========================

adding timeouts to tests

Reference

Sometimes code gets stuck in an infinite loop, or waits forever for a response from a server. Sometimes a test that runs too long is in itself an indication of failure.

how can we add timeouts to tests to avoid getting stuck? the package pytest-timeout solves this by providing a pytest plugin.

  1. install the package using pip install pytest-timeout
  2. you can set timeouts on individual tests by marking them with the @pytest.mark.timeout(timeout=60) decorator
  3. you can set the timeout for all tests globally by using the timeout commandline parameter for pytest, like so: pytest --timeout=300

pip install -q pytest-timeout

%%file test_timeouts.py

import pytest

@pytest.mark.timeout(5)
def test_infinite_sleep():
    import time
    while True:
        time.sleep(1)
        print('sleeping ...') 

def test_empty():
    pass

Overwriting test_timeouts.py
!python -m pytest --verbose test_timeouts.py

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
cachedir: .pytest_cache
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 

――――――――――――――――――――――――――――― test_infinite_sleep ――――――――――――――――――――――――――――――

    @pytest.mark.timeout(5)
    def test_infinite_sleep():
        import time
        while True:
>           time.sleep(1)
E           Failed: Timeout >5.0s

test_timeouts.py:8: Failed
----------------------------- Captured stdout call -----------------------------
sleeping ...
sleeping ...
sleeping ...
sleeping ...

 test_timeouts.py::test_infinite_sleep β¨―                          50% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     
 test_timeouts.py::test_empty βœ“                                  100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (5.03s):
       1 passed
       1 failed
         - test_timeouts.py:4 test_infinite_sleep

notice how the test_empty test still runs and passes, even though the previous test was aborted
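instead of repeating the marker on every test, a project-wide default can also be set in pytest.ini (a standard pytest-timeout option); a per-test @pytest.mark.timeout marker still overrides it:

```ini
# pytest.ini -- give every test a 300 second budget by default
[pytest]
timeout = 300
```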

your turn

  1. use the requests module to .get() the url http://httpstat.us/101 and call .raise_for_status()
  2. since this will hang forever, use a timeout on the test so that it fails after 5 seconds
  3. since the test is guaranteed to fail, mark it with the xfail (expected fail) annotation @pytest.mark.xfail(reason='timeout')
%%file test_http101_timeout.py

import pytest
import requests

@pytest.mark.xfail(reason='timeout')
@pytest.mark.timeout(2)
def test_http101_timeout():
    response = requests.get('http://httpstat.us/101')
    response.raise_for_status()

Overwriting test_http101_timeout.py
!python -m pytest test_http101_timeout.py

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 
 test_http101_timeout.py x                                       100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (5.22s):
       1 xfailed

testing for exceptions

Reference

consider the following code fragment from person.py:

class Person:
    def add_friend(self, other_person):
        if not isinstance(other_person, Person):
            raise TypeError(other_person, 'is not a', Person)
        self.friends.add(other_person)
        other_person.friends.add(self)

the add_friend() method will raise an exception if it is used with a parameter which is not a Person

how can we test this?

if we wrap the code that is supposed to raise the exception in a pytest.raises() context manager, the test passes only when the exception is actually raised:

%%file test_add_person_exception.py

import pytest
from person import Person
from test_person_fixtures2 import *

def test_add_person_exception(terry):
    with pytest.raises(TypeError):
        terry.add_friend("a shrubbery!")

def test_add_person_exception_detailed(terry):
    with pytest.raises(TypeError) as excinfo:
        terry.add_friend("a shrubbery!")
    
    assert 'Person' in str(excinfo.value)

@pytest.mark.xfail(reason='expected to fail')
def test_add_person_no_exception(terry, eric):
    with pytest.raises(TypeError): # is expecting an exception that won't happen
        terry.add_friend(eric) # this does not throw an exception


Overwriting test_add_person_exception.py
!python -m pytest test_add_person_exception.py

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 
 test_add_person_exception.py βœ“βœ“xβœ“                               100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (0.04s):
       3 passed
       1 xfailed

your turn

use the requests module and the .raise_for_status() method

  1. test that .raise_for_status will raise an exception when accessing the following URLs:
    • http://httpstat.us/401
    • http://httpstat.us/404
    • http://httpstat.us/500
    • http://httpstat.us/501
  2. test that .raise_for_status will NOT raise an exception when accessing the following URLs:
    • http://httpstat.us/200
    • http://httpstat.us/201
    • http://httpstat.us/202
    • http://httpstat.us/203
    • http://httpstat.us/204
    • http://httpstat.us/303
    • http://httpstat.us/304

hints:

  1. the requests module raises exceptions of type requests.HTTPError
  2. use parameterized fixtures to avoid writing a lot of tests or boilerplate code
  3. use timeouts to avoid tests that wait forever

solution

%%file test_requests.py

import pytest
import requests

@pytest.fixture(params=[200, 201, 202, 203, 204, 303, 304])
def good_url(request):
    return f'http://httpstat.us/{request.param}'

@pytest.fixture(params=[401, 404, 500, 501])
def bad_url(request):
    return f'http://httpstat.us/{request.param}'

@pytest.mark.timeout(2)
def test_good_urls(good_url):
    response = requests.get(good_url)
    response.raise_for_status()

@pytest.mark.timeout(2)
def test_bad_urls(bad_url):
    response = requests.get(bad_url)
    with pytest.raises(requests.HTTPError):
        response.raise_for_status()

Overwriting test_requests.py
!python -m pytest --verbose test_requests.py

Test session starts (platform: linux, Python 3.6.9, pytest 5.3.5, pytest-sugar 0.9.2)
cachedir: .pytest_cache
rootdir: /content
plugins: sugar-0.9.2, xdist-1.31.0, forked-1.1.3, timeout-1.3.4
collecting ... 
 test_requests.py::test_good_urls[200] βœ“                           9% β–‰         
 test_requests.py::test_good_urls[201] βœ“                          18% β–ˆβ–Š        
 test_requests.py::test_good_urls[202] βœ“                          27% β–ˆβ–ˆβ–Š       
 test_requests.py::test_good_urls[203] βœ“                          36% β–ˆβ–ˆβ–ˆβ–‹      
 test_requests.py::test_good_urls[204] βœ“                          45% β–ˆβ–ˆβ–ˆβ–ˆβ–‹     
 test_requests.py::test_good_urls[303] βœ“                          55% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ    
 test_requests.py::test_good_urls[304] βœ“                          64% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ▍   
 test_requests.py::test_bad_urls[401] βœ“                           73% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ▍  
 test_requests.py::test_bad_urls[404] βœ“                           82% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž 
 test_requests.py::test_bad_urls[500] βœ“                           91% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ▏
 test_requests.py::test_bad_urls[501] βœ“                          100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Results (2.12s):
      11 passed
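the same status-code matrix can also be expressed without fixtures, using @pytest.mark.parametrize directly on the test. both styles are idiomatic; fixtures win when several tests share the same parameters. a sketch with a stand-in assertion instead of the real HTTP call:

```python
import pytest

@pytest.mark.parametrize("status", [200, 201, 202, 203, 204, 303, 304])
def test_status_is_success_or_redirect(status):
    # stand-in check; the real test would issue the HTTP request
    assert 200 <= status < 400
```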

running tests in parallel

Reference

The pytest-xdist plugin extends pytest with some unique test execution modes:

  • test run parallelization: if you have multiple CPUs or hosts you can use those for a combined test run. This allows you to speed up development or to use the special resources of remote machines.
  • --looponfail: run your tests repeatedly in a subprocess. After each run pytest waits until a file in your project changes and then re-runs the previously failing tests. This is repeated until all tests pass, after which a full run is again performed.
  • Multi-Platform coverage: you can specify different Python interpreters or different platforms and run tests in parallel on all of them.
  • --boxed and pytest-forked: run each test in its own process, so that if a test crashes catastrophically, it doesn't interfere with other tests

We’re going to cover only test run parallelization.

first, lets install pytest-xdist:

pip install -qq pytest-xdist

now, lets write a few long running tests

%%file test_parallel.py

import time
def test_t1():
    time.sleep(2)

def test_t2():
    time.sleep(2)

def test_t3():
    time.sleep(2)

def test_t4():
    time.sleep(2)

def test_t5():
    time.sleep(2)

def test_t6():
    time.sleep(2)

def test_t7():
    time.sleep(2)

def test_t8():
    time.sleep(2)

def test_t9():
    time.sleep(2)

def test_t10():
    time.sleep(2)


Writing test_parallel.py

now, we can run these tests in parallel using the pytest -n NUM commandline parameter.

Let's use 10 workers (pytest-xdist also accepts -n auto to match the CPU count); the sleeps then account for about 2 seconds instead of 20, plus some worker startup overhead

!python -m pytest -n 10 test_parallel.py

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /content
plugins: xdist-1.31.0, forked-1.1.3, timeout-1.3.4
gw0 [10] / gw1 [10] / gw2 [10] / gw3 [10] / gw4 [10] / gw5 [10] / gw6 [10] / gw7 [10] / gw8 [10] / gw9 [10]
..........                                                               [100%]
============================== 10 passed in 5.94s ==============================

Codebase to test: class Person

Let's reuse the Person and OlympicRunner classes we defined in earlier chapters in order to see how to write tests

%%file person.py

# Person v1
class Person:
    def __init__(self, name):
        name = name
    def __repr__(self):
        return f"{type(self).__name__}({self.name!r})"
    def walk(self):
        print(self.name, 'walking')
    def run(self):
        print(self.name,'running')
    def swim(self):
        print(self.name,'swimming')
        
class OlympicRunner(Person):
    def run(self):
        print(self.name, "running incredibly fast!")
        
    def show_medals(self):
        print(self.name, 'showing my olympic medals')
    
def train(person):
    person.walk()
    person.swim()
    person.run()

Overwriting person.py

our first test

  • conventions
    1. files with tests should be called test_*.py or *_test.py
    2. test function name should start with test_
  • to see if our code works, we can use the assert python keyword. pytest adds hooks to assertions to make them more useful
%%file test_person1.py
from person import Person

# our first test
def test_preson_name():
    terry = Person('Terry Gilliam')
    assert terry.name == 'Terry Gilliam'

Overwriting test_person1.py
!python -m pytest

============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-3.6.4, py-1.8.1, pluggy-0.7.1
rootdir: /content, inifile:
collected 1 item                                                               

test_person1.py F                                                        [100%]

=================================== FAILURES ===================================
_______________________________ test_preson_name _______________________________

    def test_preson_name():
        terry = Person('Terry Gilliam')
>       assert terry.name == 'Terry Gilliam'
E       AttributeError: 'Person' object has no attribute 'name'

test_person1.py:6: AttributeError
=========================== 1 failed in 0.03 seconds ===========================

let's run our tests, this time with ipytest, which runs pytest on tests defined inside the notebook itself

# execute the tests via pytest, arguments are passed to pytest
import ipytest
ipytest.run('-qq')


running our first test

import pytest
from unittest import mock as mocking  # alias used for unittest.mock throughout
from person import Person, OlympicRunner, train

# very simple test
def test_person_repr1():
    assert str(Person('terry gilliam')) == f"Person('terry gilliam')"

# test using mock object
def test_train1():
    person = mocking.Mock()
    
    train(person)
    person.walk.assert_called_once()
    person.run.assert_called_once()
    person.swim.assert_called_once()

# create factory for person's name
@pytest.fixture
def person_name():
    return 'terry gilliam'
    
# create factory for Person, that requires a person_name 
@pytest.fixture
def person(person_name):
    return Person(person_name)

# test using mock object
def test_train2(person):
    # this makes sure no other method is called
    person = mocking.create_autospec(person)
    
    train(person)
    person.walk.assert_called_once()
    person.run.assert_called_once()
    person.swim.assert_called_once()


# test Person using and request a person, person_name from the fixtures
def test_person_repr2(person, person_name):
    assert str(person) == f"Person('{person_name}')"
    
# fixture with multiple values
@pytest.fixture(params=['usain bolt', 'Matthew Wells'])
def olympic_runner_name(request):
    return request.param

@pytest.fixture
def olympic_runner(olympic_runner_name):
    return OlympicRunner(olympic_runner_name)

# test train() using mock object for print
@mocking.patch('builtins.print')
def test_train3(mocked_print, olympic_runner):
    train(olympic_runner)
    mocked_print.assert_called()

# execute the tests via pytest, arguments are passed to pytest
ipytest.run('-qq')

......                                                                                                           [100%]