Test-Driven Development with Python

Introduction

Writing an error-free program is not easy. And even if your program runs without errors, keeping it error-free after an update or a change is even harder: you don't want to introduce new errors or replace correct code with broken code.

The answer to this situation (directly from the Oracle of Delphi) is: Testing, Testing, Testing

And the best way to test is to start with tests.

This means: think about what the result should be, and then create a test that checks it. Imagine you have to write a function that adds two values, and you have to describe its functionality.

So maybe your description contains one or two examples:

My function adds two numbers, e.g. 5 plus 7 is 12 (or at least it should be 12 :))

The procedure with TDD is:

  • think about and define what the function should do
  • write a stub for the function, e.g. only the function parameters and the return type
  • write a function that tests your function with defined parameters and a known result

For our example above, this means:

def add(val1, val2):
    return 0  # this is only a dummy return value

def test_add():
    result = add(5, 7)

    if result == 12:
        print("everything fine")
    else:
        print("oops, problems with basic arithmetic")

test_add()

Now, with such tests in your toolbox, you can verify your code at any time by running them.

$ python test_add.py
oops, problems with basic arithmetic
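
In TDD terms this failing run is the "red" step. The next step is to replace the dummy return value with a real implementation so that the test turns "green":

def add(val1, val2):
    return val1 + val2  # real implementation instead of the dummy value

$ python test_add.py
everything fine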

Set up virtual environment

Create virtual environment

$ python3 -m venv .env/python

Activate environment

Add the following line to .bashrc (or .envrc if you are using direnv)

$ . .env/python/bin/activate
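
To check that the environment is really active, look at which interpreter is used now (the path prefix depends on where you created the environment):

$ which python
/path/to/project/.env/python/bin/python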

Install required packages

$ pip install pytest

Create a sample application

Prepare folder

Create folder for sources

$ mkdir src
$ cd src

Create sample package

$ mkdir binary
$ cd binary
$ touch __init__.py
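
The package itself is still empty. As an illustration (the file name calc.py is only an example), it could contain the add() function from the introduction:

# src/binary/calc.py
def add(val1, val2):
    return val1 + val2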

Add Unit Tests

We will start with our first test. Create a folder for the tests and a file test_base.py.

$ mkdir tests

Create test_base.py

import pytest

def test_should_pass():
    pass

def test_should_raise_error():
    # pytest.raises marks the ValueError as expected, so the test passes
    with pytest.raises(ValueError):
        raise ValueError
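
You can already run this file with pytest (the exact path depends on where you created the tests folder):

$ pytest tests/test_base.py -v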

Add more tests

def test_check_if_true_is_true():
    assert True == True

def inc(x):
    return x + 1

def test_check_if_inc_works():
    assert inc(3) == 4
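
Instead of writing one test per input, pytest can run the same check for several inputs with its parametrize marker. A small sketch, assuming it lives in the same file as inc():

import pytest

@pytest.mark.parametrize("value, expected", [(0, 1), (3, 4), (-1, 0)])
def test_inc_parametrized(value, expected):
    assert inc(value) == expected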

Testing Frameworks

https://wiki.python.org/moin/PythonTestingToolsTaxonomy

unittest – unit testing framework from the standard library

import unittest

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
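
Because of the unittest.main() call you can run the file directly with the interpreter; -v gives verbose output (the file name test_string_methods.py is only an example):

$ python test_string_methods.py -v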

pytest – helps you write better programs

# content of test_sample.py
def inc(x):
    return x + 1

def test_answer():
    assert inc(3) == 5

$ pytest
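
Running pytest on this file shows one of pytest's strengths: the failing assertion is reported together with the value that inc(3) actually returned, so plain assert statements are enough. Another helpful feature is fixtures: common setup is declared once and injected into tests by argument name. A minimal sketch (all names are illustrative):

import pytest

@pytest.fixture
def numbers():
    # hypothetical fixture providing shared test data
    return 3, 4

def test_sum_of_numbers(numbers):
    a, b = numbers
    assert a + b == 7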

nose – is nicer testing for python

# multiply would normally be imported from the code under test
def multiply(a, b):
    return a * b

def test_numbers_3_4():
    assert multiply(3, 4) == 12

def test_strings_a_3():
    assert multiply('a', 3) == 'aaa'

Python BDD Pattern

import mango  # BDD helper module assumed by this example
from unittest import TestCase

class MangoUseCase(TestCase):
  def setUp(self):
    self.user = 'placeholder'

  @mango.given('I am logged-in')
  def test_profile(self):
    self.given.profile = 'profile'
    self.given.photo = 'photo'

    self.given.notifications = 3
    self.given.notifications_unread = 1

    @mango.when('I click profile')
    def when_click_profile():
      print('click')

      @mango.then('I see profile')
      def then_profile():
        self.assertEqual(self.given.profile, 'profile')

      @mango.then('I see my photo')
      def then_photo():
        self.assertEqual(self.given.photo, 'photo')

radish – is not just another BDD tool … the root from red to green

from radish import given, when, then

@given("I have the numbers {number1:g} and {number2:g}")
def have_numbers(step, number1, number2):
    step.context.number1 = number1
    step.context.number2 = number2

@when("I sum them")
def sum_numbers(step):
    step.context.result = step.context.number1 + \
        step.context.number2

@then("I expect the result to be {result:g}")
def expect_result(step, result):
    assert step.context.result == result
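
radish matches these step implementations against a Gherkin feature file. A minimal feature for the steps above could look like this (file name, numbers and folder layout are illustrative; see the radish documentation for where the step module has to live):

# features/SumNumbers.feature
Feature: Sum numbers
    Scenario: Sum two numbers
        Given I have the numbers 5 and 3
        When I sum them
        Then I expect the result to be 8

$ radish features/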

doctest

"""
The example module supplies one function, factorial().  For example,

>>> factorial(5)
120
"""

def factorial(n):
    """Return the factorial of n, an exact integer >= 0.

    >>> [factorial(n) for n in range(6)]
    [1, 1, 2, 6, 24, 120]
    >>> factorial(30)
    265252859812191058636308480000000
    >>> factorial(-1)
    Traceback (most recent call last):
        ...
    ValueError: n must be >= 0

    Factorials of floats are OK, but the float must be an exact integer:
    >>> factorial(30.1)
    Traceback (most recent call last):
        ...
    ValueError: n must be exact integer
    >>> factorial(30.0)
    265252859812191058636308480000000

    It must also not be ridiculously large:
    >>> factorial(1e100)
    Traceback (most recent call last):
        ...
    OverflowError: n too large
    """

    import math
    if not n >= 0:
        raise ValueError("n must be >= 0")
    if math.floor(n) != n:
        raise ValueError("n must be exact integer")
    if n+1 == n:  # catch a value like 1e300
        raise OverflowError("n too large")
    result = 1
    factor = 2
    while factor <= n:
        result *= factor
        factor += 1
    return result

if __name__ == "__main__":
    import doctest
    doctest.testmod()
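
If you run the module directly there is no output as long as all examples pass; the -v flag makes doctest report every example it tried (assuming the module is saved as example.py):

$ python example.py -v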

Sample Session with Test Frameworks

$ py.test -v
========================================================= test session starts ==========================================================
platform darwin -- Python 3.7.3, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- /CLOUD/Development.Anaconda/anaconda3/bin/python
cachedir: .pytest_cache
rootdir: /CLOUD/Development.Python/Repositories.FromGithub/repositories/python-toolbox/Working-with-TDD/app, inifile:
plugins: remotedata-0.3.1, openfiles-0.3.2, doctestplus-0.3.0, arraydiff-0.3
collected 4 items

test_base.py::test_should_pass PASSED                                                                                            [ 25%]
test_base.py::test_should_raise_error PASSED                                                                                     [ 50%]
test_base.py::test_check_if_true_is_true PASSED                                                                                  [ 75%]
test_base.py::test_check_if_inc_works PASSED
$ nosetests -v
test_base.test_should_pass ... ok
test_base.test_should_raise_error ... ok
test_base.test_check_if_true_is_true ... ok
test_base.test_check_if_inc_works ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.001s

OK

Links and additional information

http://pythontesting.net/

https://www.xenonstack.com/blog/test-driven-development-big-data/

https://realpython.com/python-testing/