Mdd module unit test docs (#31373)
* new documentation for unit testing - especially module unit testing
* unit test documentation reformatting and further fixes
* unit test documentation - point to online coverage reports & fix bad spaces
* Small copy edits.
* First pass copy edit / rewrite. More info needed.
* testing documentation - clean up structure, especially code coverage - reduce repetition
* module unit test documentation - improved introduction
* testing documentation - more fixes from and inspired by review from dharmabumstead
* testing documentation - fixes from mattclay + some other minor tweaks
* More copy edits.
* testing documentation - further fixes from review
* Copy edits
* Copy edits
* More copy edits.
4 changed files with 716 additions and 26 deletions

@@ -33,9 +33,14 @@ At a high level we have the following classifications of tests:

* Tests directly against individual parts of the code base.

If you're a developer, one of the most valuable things you can do is look at the GitHub
issues list and help fix bugs. We almost always prioritize bug fixing over feature
development.

Even if you're not a developer, helping to test pull requests for bug fixes and features is
still immensely valuable. Ansible users who understand how to write playbooks and roles
should be able to add integration tests, so GitHub pull requests with integration tests
that demonstrate bugs in action are also a great way to help.

Testing within GitHub & Shippable

@@ -47,7 +52,6 @@ Organization

When Pull Requests (PRs) are created they are tested using Shippable, a Continuous Integration (CI) tool. Results are shown at the end of every PR.

When Shippable detects an error that can be linked back to a file modified in the PR, the relevant lines will be added as a GitHub comment. For example::

    The test `ansible-test sanity --test pep8` failed with the following errors:

@@ -69,7 +73,6 @@ Then run the tests detailed in the GitHub comment::

    ansible-test sanity --test pep8
    ansible-test sanity --test validate-modules

If there isn't a GitHub comment stating what's failed, you can inspect the results by clicking on the "Details" button under the "checks have failed" message at the end of the PR.

Rerunning a failing CI job

@@ -91,10 +94,6 @@ If the issue persists, please contact us in ``#ansible-devel`` on Freenode IRC.

How to test a PR
================

Ideally, code should add tests that prove that the code works. That's not always possible and tests are not always comprehensive, especially when a user doesn't have access to a wide variety of platforms, or is using an API or web service. In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time as well.

Thankfully, helping to test Ansible is pretty straightforward, assuming you are familiar with how Ansible works.

@@ -188,8 +187,26 @@ If the PR does not resolve the issue, or if you see any failures from the unit/integration tests

    | some other output
    | ```

Code Coverage Online
````````````````````

`The online code coverage reports <https://codecov.io/gh/ansible/ansible>`_ are a good way
to identify areas for testing improvement in Ansible. By following red colors you can
drill down through the reports to find files which have no tests at all. Adding both
integration and unit tests that clearly show how code should work, verify important
Ansible functions, and increase testing coverage in areas where there is none is a
valuable way to help improve Ansible.

The code coverage reports only cover the ``devel`` branch of Ansible where new feature
development takes place. Pull requests and new code will be missing from the codecov.io
coverage reports, so local reporting is needed. Most ``ansible-test`` commands allow you
to collect code coverage; this is particularly useful for indicating where to extend
testing. See :doc:`testing_running_locally` for more information.

Want to know more about testing?
================================

If you'd like to know more about the plans for improving the testing of Ansible, join the
`Testing Working Group <https://github.com/ansible/community/blob/master/meetings/README.md>`_.

@@ -45,7 +45,19 @@ Use the ``ansible-test shell`` command to get an interactive shell in the same environment

Code Coverage
=============

Code coverage reports make it easy to identify untested code for which more tests should
be written. Online reports are available but only cover the ``devel`` branch (see
:doc:`testing`). For new code local reports are needed.

Add the ``--coverage`` option to any test command to collect code coverage data. If you
aren't using the ``--tox`` or ``--docker`` options, which create an isolated Python
environment, you may have to use the ``--requirements`` option to ensure that the
correct version of the coverage module is installed::

    ansible-test units --coverage apt
    ansible-test integration --coverage aws_lambda --tox --requirements
    ansible-test coverage html

Reports can be generated in several different formats:

@@ -53,4 +65,7 @@ Reports can be generated in several different formats:

* ``ansible-test coverage html`` - HTML report.
* ``ansible-test coverage xml`` - XML report.

To clear data between test runs, use the ``ansible-test coverage erase`` command. For a full list of features, see the online help::

    ansible-test coverage --help

@@ -2,19 +2,24 @@

Unit Tests
**********

Unit tests are small isolated tests that target a specific library or module. Unit tests
in Ansible are currently the only way of driving tests from Python within Ansible's
continuous integration process. This means that in some circumstances the tests may be a
bit wider than just units.

.. contents:: Topics

Available Tests
===============

Unit tests can be found in `test/units
<https://github.com/ansible/ansible/tree/devel/test/units>`_. Notice that the directory
structure of the tests matches that of ``lib/ansible/``.

Running Tests
=============

The Ansible unit tests can be run across the whole code base by doing:

.. code:: shell

@@ -35,18 +40,22 @@ Or against a specific Python version by doing:

    ansible-test units --tox --python 2.7 apt

For advanced usage see the online help::

    ansible-test units --help

You can also run tests in Ansible's continuous integration system by opening a pull
request. This will automatically determine which tests to run based on the changes made
in your pull request.


Installing dependencies
=======================

``ansible-test`` has a number of dependencies. For ``units`` tests we suggest using ``tox``.

The dependencies can be installed using the ``--requirements`` argument, which will
install all the dependencies needed for unit tests. For example:

.. code:: shell

@@ -58,7 +67,11 @@ The dependencies can be installed using the ``--requirements`` argument, which will

Using ``ansible-test`` with ``--tox`` requires tox >= 2.5.0.

The full list of requirements can be found at `test/runner/requirements
<https://github.com/ansible/ansible/tree/devel/test/runner/requirements>`_. Requirements
files are named after their respective commands. See also the `constraints
<https://github.com/ansible/ansible/blob/devel/test/runner/requirements/constraints.txt>`_
applicable to all commands.

@@ -67,22 +80,98 @@ Extending unit tests

.. warning:: What a unit test isn't

   If you start writing a test that requires external services then
   you may be writing an integration test, rather than a unit test.


Structuring Unit Tests
``````````````````````

Ansible drives unit tests through `pytest <https://docs.pytest.org/en/latest/>`_. This
means that tests can either be written as simple functions, included in any file
named like ``test_<something>.py``, or as classes.

Here is an example of a function::

    # this function will be called simply because it is named test_*()

    def test_add():
        a = 10
        b = 23
        c = 33
        assert a + b == c

Here is an example of a class::

    import unittest

    class AddTester(unittest.TestCase):

        def setUp(self):
            self.a = 10
            self.b = 23

        # this function will be run by pytest because it is named test_*()
        def test_add(self):
            c = 33
            assert self.a + self.b == c

        # this function will also be run by pytest
        def test_subtract(self):
            c = -13
            assert self.a - self.b == c

Both methods work fine in most circumstances; the function-based interface is simpler and
quicker and so that's probably where you should start when you are just trying to add a
few basic tests for a module. The class-based tests allow tidier set up and tear down of
pre-requisites, so if you have many test cases for your module you may want to refactor
to use that.

Assertions using the simple ``assert`` function inside the tests will give full
information on the cause of the failure with a trace-back of functions called during the
assertion. This means that plain asserts are recommended over other external assertion
libraries.

A number of the unit test suites include functions that are shared between several
modules, especially in the networking arena. In these cases a file is created in the same
directory, which is then included directly.


Module test case common code
````````````````````````````

Keep common code as specific as possible within the ``test/units/`` directory structure. For
example, if it's specific to testing Amazon modules, it should be in
``test/units/modules/cloud/amazon/``. Don't import common unit test code from directories
outside the current or parent directories.

Don't import other unit tests from a unit test. Any common code should be in dedicated
files that aren't themselves tests.
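
For example, a test for a hypothetical Amazon module could import a shared helper that
lives in a dedicated non-test file in the same directory (an illustrative sketch only:
``amazon_helpers.py`` and ``fake_aws_client`` are made-up names, and the exact import
root may differ from what your test tree uses)::

    # test/units/modules/cloud/amazon/test_my_aws_module.py
    # Shared helper code lives in a dedicated non-test file alongside the tests.
    from units.modules.cloud.amazon.amazon_helpers import fake_aws_client

    def test_client_is_created():
        client = fake_aws_client('ec2')
        assert client is not None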

Fixtures files
``````````````

To mock out fetching results from devices, or provide other complex data structures that
come from external libraries, you can use ``fixtures`` to read in pre-generated data.

Text files live in ``test/units/modules/network/PLATFORM/fixtures/``

Data is loaded using the ``load_fixture`` method

See `eos_banner test
<https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py>`_
for a practical example.
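
The ``load_fixture`` helpers shared by the network module tests generally look something
like the following (a minimal sketch of the idea, not the exact shared implementation)::

    import json
    import os

    fixture_path = os.path.join(os.path.dirname(__file__), 'fixtures')
    fixture_data = {}

    def load_fixture(name):
        """Read a fixture file, parse it as JSON if possible, and cache the result."""
        path = os.path.join(fixture_path, name)
        if path in fixture_data:
            return fixture_data[path]
        with open(path) as fixture_file:
            data = fixture_file.read()
        try:
            data = json.loads(data)
        except ValueError:
            pass
        fixture_data[path] = data
        return data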

If you are simulating APIs you may find that the Python ``placebo`` library is useful. See
:doc:`testing_units_modules` for more information.


Code Coverage For New or Updated Unit Tests
```````````````````````````````````````````

New code will be missing from the codecov.io coverage reports (see :doc:`testing`), so
local reporting is needed. Most ``ansible-test`` commands allow you to collect code
coverage; this is particularly useful for indicating where to extend testing.

To collect coverage data add the ``--coverage`` argument to your ``ansible-test`` command line:

@@ -99,7 +188,21 @@ Reports can be generated in several different formats:

* ``ansible-test coverage html`` - HTML report.
* ``ansible-test coverage xml`` - XML report.

To clear data between test runs, use the ``ansible-test coverage erase`` command. See
:doc:`testing_units_running_locally` for more information about generating coverage
reports.

.. seealso::

   :doc:`testing_units_modules`
       Special considerations for unit testing modules
   :doc:`testing_running_locally`
       Running tests locally including gathering and reporting coverage data
   `Python 3 documentation - 26.4. unittest — Unit testing framework <https://docs.python.org/3/library/unittest.html>`_
       The documentation of the unittest framework in Python 3
   `Python 2 documentation - 25.3. unittest — Unit testing framework <https://docs.python.org/2/library/unittest.html>`_
       The documentation of the earliest supported unittest framework - from Python 2.6
   `pytest: helps you write better programs <https://docs.pytest.org/en/latest/>`_
       The documentation of pytest - the framework actually used to run Ansible unit tests

docs/docsite/rst/dev_guide/testing_units_modules.rst (new file, 555 lines)

@@ -0,0 +1,555 @@

****************************
Unit Testing Ansible Modules
****************************

.. contents:: Topics

Introduction
============

This document explains why, how and when you should use unit tests for Ansible modules.
It doesn't apply to other parts of Ansible, where the recommendations are normally closer
to the Python standard. There is basic documentation for Ansible unit tests in the
developer guide :doc:`testing_units`. This document should be readable for a new Ansible
module author. If you find it incomplete or confusing, please open a bug or ask for help
on Ansible IRC.

What Are Unit Tests?
====================

Ansible includes a set of unit tests in the :file:`test/units` directory. These tests
primarily cover the internals but can also cover Ansible modules. The structure of the
unit tests matches the structure of the code base, so the tests that reside in the
:file:`test/units/modules/` directory are organized by module groups.

Integration tests can be used for most modules, but there are situations where
cases cannot be verified using integration tests. This means that Ansible unit test cases
may extend beyond testing only minimal units and in some cases will include some
level of functional testing.

Why Use Unit Tests?
===================

Ansible unit tests have advantages and disadvantages. It is important to understand these.
Advantages include:

* Most unit tests are much faster than most Ansible integration tests. The complete suite
  of unit tests can be run regularly by a developer on their local system.
* Unit tests can be run by developers who don't have access to the system which the module is
  designed to work on, allowing a level of verification that changes to core functions
  haven't broken module expectations.
* Unit tests can easily substitute system functions, allowing testing of software that
  would otherwise be impractical to test. For example, the ``sleep()`` function can be
  replaced and we check that a ten minute sleep was called without actually waiting ten
  minutes (see the sketch after this list).
* Unit tests are run on different Python versions. This allows us to
  ensure that the code behaves in the same way on different Python versions.
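
As a sketch of the ``sleep()`` example above (``check_connection_with_retries`` is a
hypothetical function under test; the general pattern uses the ``mock`` library bundled
with Ansible)::

    from ansible.compat.tests.mock import patch

    def test_retry_sleeps_for_ten_minutes():
        with patch('time.sleep') as sleep_mock:
            check_connection_with_retries(retries=1)
        # the code asked for a 600 second sleep, but the test did not actually wait
        sleep_mock.assert_called_once_with(600)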

There are also some potential disadvantages of unit tests. Unit tests don't normally
directly test the actual useful features of software, just their internal implementation:

* Unit tests that test the internal, non-visible features of software may make
  refactoring difficult if those internal features have to change (see also the naming
  recommendations in `Naming unit tests`_ below).
* Even if the internal feature is working correctly it is possible that there will be a
  problem between the internal code tested and the actual result delivered to the user.

Normally the Ansible integration tests (which are written in Ansible YAML) provide better
testing for most module functionality. If those tests already test a feature and perform
well, there may be little point in providing a unit test covering the same area as well.

When To Use Unit Tests
======================

There are a number of situations where unit tests are a better choice than integration
tests. For example, testing things which are impossible, slow or very difficult to test
with integration tests, such as:

* Forcing rare / strange / random situations that can't be forced, such as specific network
  failures and exceptions
* Extensive testing of slow configuration APIs
* Situations where the integration tests cannot be run as part of the main Ansible
  continuous integration running in Shippable.

Providing quick feedback
------------------------

Example:
  A single step of the rds_instance test cases can take up to 20
  minutes (the time to create an RDS instance in Amazon). The entire
  test run can last for well over an hour. All 16 of the unit tests
  complete execution in less than 2 seconds.

The time saving provided by being able to run the code in a unit test makes it worth
creating a unit test when bug fixing a module, even if those tests do not often identify
problems later. As a basic goal, every module should have at least one unit test which
will give quick feedback in easy cases without having to wait for the integration tests to
complete.

Ensuring correct use of external interfaces
-------------------------------------------

Unit tests can check the way in which external services are run to ensure that they match
specifications or are as efficient as possible *even when the final output will not be changed*.

Example:
  Package managers are often far more efficient when installing multiple packages at once
  rather than each package separately. The final result is the
  same: the packages are all installed, so the efficiency is difficult to verify through
  integration tests. By providing a mock package manager and verifying that it is called
  once, we can build a valuable test for module efficiency.
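
A test along these lines could verify that behaviour (a sketch; ``my_pkg_module``, its
``install_packages()`` function and the mocked package manager are hypothetical names used
only for illustration)::

    from ansible.compat.tests.mock import MagicMock

    import my_pkg_module  # hypothetical module under test

    def test_packages_are_installed_in_a_single_call():
        pkg_manager = MagicMock()
        my_pkg_module.install_packages(pkg_manager, ['httpd', 'mariadb-server', 'php'])

        # one install call covering all packages, rather than one call per package
        assert pkg_manager.install.call_count == 1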

Another related use is in the situation where an API has versions which behave
differently. A programmer working on a new version may change the module to work with the
new API version and unintentionally break the old version. A test case
which checks that the call happens properly for the old version can help avoid the
problem. In this situation it is very important to include version numbering in the test case
name (see `Naming unit tests`_ below).

Providing specific design tests
--------------------------------

By building a requirement for a particular part of the
code and then coding to that requirement, unit tests *can* sometimes improve the code and
help future developers understand that code.

Unit tests that test internal implementation details of code, on the other hand, almost
always do more harm than good. Testing that your packages to install are stored in a list
would slow down and confuse a future developer who might need to change that list into a
dictionary for efficiency. This problem can be reduced somewhat with clear test naming so
that the future developer immediately knows to delete the test case, but it is often
better to simply leave out the test case altogether and test for a real valuable feature
of the code, such as installing all of the packages supplied as arguments to the module.

How to unit test Ansible modules
================================

There are a number of techniques for unit testing modules. Beware that most
modules without unit tests are structured in a way that makes testing quite difficult and
can lead to very complicated tests which need more work than the code. Effectively using unit
tests may lead you to restructure your code. This is often a good thing and leads
to better code overall. Good restructuring can make your code clearer and easier to understand.


Naming unit tests
-----------------

Unit tests should have logical names. If a developer working on the module being tested
breaks the test case, it should be easy to figure out what the unit test covers from the name.
If a unit test is designed to verify compatibility with a specific software or API version
then include the version in the name of the unit test.

As an example, ``test_v2_state_present_should_call_create_server_with_name()`` would be a
good name, ``test_create_server()`` would not be.

Use of Mocks
------------

Mock objects (from https://docs.python.org/3/library/unittest.mock.html) can be very
useful in building unit tests for special / difficult cases, but they can also
lead to complex and confusing coding situations. One good use for mocks would be in
simulating an API. As with ``six``, the ``mock`` Python package is bundled with Ansible
(use ``import ansible.compat.tests.mock``). See the examples below.
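
For instance, the import and a trivial mock set-up typically look something like this (a
small sketch; the ``api_client`` object and its ``get_status`` call are hypothetical)::

    from ansible.compat.tests.mock import MagicMock

    api_client = MagicMock()
    api_client.get_status.return_value = {'status': 'available'}

    # the mocked call returns the canned data instead of talking to a real API
    assert api_client.get_status()['status'] == 'available'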

Ensuring failure cases are visible with mock objects
----------------------------------------------------

Functions like ``module.fail_json()`` are normally expected to terminate execution. When you
run with a mock module object this doesn't happen since the mock always returns another mock
from a function call. You can set up the mock to raise an exception (as shown below), or you can
assert that these functions have not been called in each test. For example::

    module = MagicMock()
    function_to_test(module, argument)
    module.fail_json.assert_not_called()

This applies not only to calling the main module but almost any other
function in a module which gets the module object.

Mocking of the actual module
----------------------------

The setup of an actual module is quite complex (see `Passing Arguments`_ below) and often
isn't needed for most functions which use a module. Instead you can use a mock object as
the module and create any module attributes needed by the function you are testing. If
you do this, beware that the module exit functions need special handling as mentioned
above, either by throwing an exception or ensuring that they haven't been called. For example::

    class AnsibleExitJson(Exception):
        """Exception class to be raised by module.exit_json and caught by the test case"""
        pass

    # you may also do the same for fail_json
    module = MagicMock()
    module.exit_json.side_effect = AnsibleExitJson

    with self.assertRaises(AnsibleExitJson) as result:
        my_module.test_this_function(module, argument)
    module.fail_json.assert_not_called()
    # examine the keyword arguments that the code passed to exit_json
    assert module.exit_json.call_args[1]['changed'] is True

API definition with unit test cases
-----------------------------------

API interaction is usually best tested with the function tests defined in Ansible's
integration testing section, which run against the actual API. There are several cases
where the unit tests are likely to work better.

Defining a module against an API specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This case is especially important for modules interacting with web services, which provide
an API that Ansible uses but which are beyond the control of the user.

By writing a custom emulation of the calls that return data from the API, we can ensure
that only the features which are clearly defined in the specification of the API are
present in the message. This means that we can check that we use the correct
parameters and nothing else.

*Example: in rds_instance unit tests a simple instance state is defined*::

    def simple_instance_list(status, pending):
        return {u'DBInstances': [{u'DBInstanceArn': 'arn:aws:rds:us-east-1:1234567890:db:fakedb',
                                  u'DBInstanceStatus': status,
                                  u'PendingModifiedValues': pending,
                                  u'DBInstanceIdentifier': 'fakedb'}]}

This is then used to create a list of states::

    rds_client_double = MagicMock()
    rds_client_double.describe_db_instances.side_effect = [
        simple_instance_list('rebooting', {"a": "b", "c": "d"}),
        simple_instance_list('available', {"c": "d", "e": "f"}),
        simple_instance_list('rebooting', {"a": "b"}),
        simple_instance_list('rebooting', {"e": "f", "g": "h"}),
        simple_instance_list('rebooting', {}),
        simple_instance_list('available', {"g": "h", "i": "j"}),
        simple_instance_list('rebooting', {"i": "j", "k": "l"}),
        simple_instance_list('available', {}),
        simple_instance_list('available', {}),
    ]

These states are then used as returns from a mock object to ensure that the ``await`` function
waits through all of the states that would mean the RDS instance has not yet completed
configuration::

    rds_i.await_resource(rds_client_double, "some-instance", "available", mod_mock,
                         await_pending=1)
    assert(len(sleeper_double.mock_calls) > 5), "await_pending didn't wait enough"

By doing this we check that the ``await`` function will keep waiting through
potentially unusual states that it would be impossible to reliably trigger through the
integration tests but which happen unpredictably in reality.

Defining a module to work against multiple API versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This case is especially important for modules interacting with many different versions of
software; for example, package installation modules that might be expected to work with
many different operating system versions.

By using previously stored data from various versions of an API we can ensure that the
code is tested against the actual data which will be sent from that version of the system
even when the version is very obscure and unlikely to be available during testing.
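
One common way to do this (a sketch; the fixture file names and the
``parse_package_list`` helper are hypothetical) is to parametrize a test over data
captured from each supported version::

    import pytest

    @pytest.mark.parametrize('fixture_name, expected_count', [
        ('package_list_v1.json', 3),
        ('package_list_v2.json', 3),
    ])
    def test_parse_package_list(fixture_name, expected_count):
        # load_fixture reads pre-generated API output stored under fixtures/
        data = load_fixture(fixture_name)
        assert len(parse_package_list(data)) == expected_count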

Ansible special cases for unit testing
======================================

There are a number of special cases for unit testing the environment of an Ansible module.
The most common are documented below, and suggestions for others can be found by looking
at the source code of the existing unit tests or asking on the Ansible IRC channel or mailing
lists.

Module argument processing
--------------------------

There are two problems with running the main function of a module:

* Since the module is supposed to accept arguments on ``STDIN`` it is a bit difficult to
  set up the arguments correctly so that the module will get them as parameters.
* All modules should finish by calling either ``module.fail_json`` or
  ``module.exit_json``, but these won't work correctly in a testing environment.

Passing Arguments
-----------------

.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
   closed since the function below will be provided in a library file.

To pass arguments to a module correctly, use a function that stores the
parameters in a special string variable. Module creation and argument processing is
handled through the AnsibleModule object in the basic section of the utilities. Normally
this accepts input on ``STDIN``, which is not convenient for unit testing. When the special
variable is set it will be treated as if the input came on ``STDIN`` to the module::

    import json

    from ansible.module_utils import basic
    from ansible.module_utils._text import to_bytes

    def set_module_args(args):
        args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
        basic._ANSIBLE_ARGS = to_bytes(args)

Simply call that function before setting up your module::

    def test_already_registered(self):
        set_module_args({
            'activationkey': 'key',
            'username': 'user',
            'password': 'pass',
        })

Handling exit correctly
-----------------------

.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
   closed since the exit and failure functions below will be provided in a library file.

The ``module.exit_json()`` function won't work properly in a testing environment since it
writes its return information to ``STDOUT`` upon exit, where it
is difficult to examine. This can be mitigated by replacing it (and ``module.fail_json()``)
with a function that raises an exception::

    def exit_json(*args, **kwargs):
        if 'changed' not in kwargs:
            kwargs['changed'] = False
        raise AnsibleExitJson(kwargs)

Now you can ensure that the first function called is the one you expected simply by
testing for the correct exception::

    def test_returned_value(self):
        set_module_args({
            'activationkey': 'key',
            'username': 'user',
            'password': 'pass',
        })

        with self.assertRaises(AnsibleExitJson) as result:
            my_module.main()

The same technique can be used to replace ``module.fail_json()`` (which is used for failure
returns from modules) and ``aws_module.fail_json_aws()`` (used in modules for Amazon
Web Services).

Running the main function
-------------------------

If you do want to run the actual main function of a module you must import the module, set
the arguments as above, set up the appropriate exit exception and then run the module::

    # This test is based around pytest's features for individual test functions
    import pytest
    import ansible.modules.module.group.my_module as my_module

    def fake_exit_json(*args, **kwargs):
        """Stand-in for AnsibleModule.exit_json that simply returns instead of exiting"""
        pass

    def test_main_function(monkeypatch):
        monkeypatch.setattr(my_module.AnsibleModule, "exit_json", fake_exit_json)
        set_module_args({
            'activationkey': 'key',
            'username': 'user',
            'password': 'pass',
        })
        my_module.main()

Handling calls to external executables
--------------------------------------

Modules must use ``AnsibleModule.run_command`` in order to execute an external command. This
method needs to be mocked:

Here is a simple mock of ``AnsibleModule.run_command`` (taken from
test/units/modules/packaging/os/test_rhn_register.py and
test/units/modules/packaging/os/rhn_utils.py)::

    with patch.object(basic.AnsibleModule, 'run_command') as run_command:
        run_command.return_value = 0, '', ''  # successful execution, no output
        with self.assertRaises(AnsibleExitJson) as result:
            self.module.main()
        self.assertFalse(result.exception.args[0]['changed'])
        # Check that run_command has been called with the expected arguments, exactly once
        run_command.assert_called_once_with('/usr/bin/command args')
        self.assertEqual(run_command.call_count, 1)
        # (use self.assertFalse(run_command.called) instead in tests where the
        # external command should not have been run)

A Complete Example
------------------

The following example is a complete skeleton that reuses the mocks explained above and adds a new
mock for ``AnsibleModule.get_bin_path``::

    import json

    from ansible.compat.tests import unittest
    from ansible.compat.tests.mock import patch
    from ansible.module_utils import basic
    from ansible.module_utils._text import to_bytes
    from ansible.modules.namespace import my_module


    def set_module_args(args):
        """prepare arguments so that they will be picked up during module creation"""
        args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
        basic._ANSIBLE_ARGS = to_bytes(args)


    class AnsibleExitJson(Exception):
        """Exception class to be raised by module.exit_json and caught by the test case"""
        pass


    class AnsibleFailJson(Exception):
        """Exception class to be raised by module.fail_json and caught by the test case"""
        pass


    def exit_json(*args, **kwargs):
        """function to patch over exit_json; package return data into an exception"""
        if 'changed' not in kwargs:
            kwargs['changed'] = False
        raise AnsibleExitJson(kwargs)


    def fail_json(*args, **kwargs):
        """function to patch over fail_json; package return data into an exception"""
        kwargs['failed'] = True
        raise AnsibleFailJson(kwargs)


    def get_bin_path(self, arg, required=False):
        """Mock AnsibleModule.get_bin_path"""
        if arg.endswith('my_command'):
            return '/usr/bin/my_command'
        else:
            if required:
                fail_json(msg='%r not found !' % arg)


    class TestMyModule(unittest.TestCase):

        def setUp(self):
            self.mock_module_helper = patch.multiple(basic.AnsibleModule,
                                                     exit_json=exit_json,
                                                     fail_json=fail_json,
                                                     get_bin_path=get_bin_path)
            self.mock_module_helper.start()
            self.addCleanup(self.mock_module_helper.stop)

        def test_module_fail_when_required_args_missing(self):
            with self.assertRaises(AnsibleFailJson):
                set_module_args({})
                my_module.main()

        def test_ensure_command_called(self):
            set_module_args({
                'param1': 10,
                'param2': 'test',
            })

            with patch.object(basic.AnsibleModule, 'run_command') as mock_run_command:
                stdout = 'configuration updated'
                stderr = ''
                rc = 0
                mock_run_command.return_value = rc, stdout, stderr  # successful execution

                with self.assertRaises(AnsibleExitJson) as result:
                    my_module.main()
                self.assertFalse(result.exception.args[0]['changed'])  # ensure result did not report a change

                mock_run_command.assert_called_once_with('/usr/bin/my_command --value 10 --name test')

Restructuring modules to enable testing module set up and other processes
--------------------------------------------------------------------------

Often modules have a ``main()`` function which sets up the module and then performs other
actions. This can make it difficult to check argument processing. This can be made easier by
moving module configuration and initialization into a separate function. For example::

    argument_spec = dict(
        # module function variables
        state=dict(choices=['absent', 'present', 'rebooted', 'restarted'], default='present'),
        apply_immediately=dict(type='bool', default=False),
        wait=dict(type='bool', default=False),
        wait_timeout=dict(type='int', default=600),
        allocated_storage=dict(type='int', aliases=['size']),
        db_instance_identifier=dict(aliases=["id"], required=True),
    )

    def setup_module_object():
        module = AnsibleAWSModule(
            argument_spec=argument_spec,
            required_if=required_if,
            mutually_exclusive=[['old_instance_id', 'source_db_instance_identifier',
                                 'db_snapshot_identifier']],
        )
        return module

    def main():
        module = setup_module_object()
        validate_parameters(module)
        conn = setup_client(module)
        return_dict = run_task(module, conn)
        module.exit_json(**return_dict)

This now makes it possible to run tests against the module initiation function::

    def test_rds_module_setup_fails_if_db_instance_identifier_parameter_missing():
        # db_instance_identifier parameter is missing
        set_module_args({
            'state': 'absent',
            'apply_immediately': 'True',
        })

        with pytest.raises(AnsibleFailJson):
            setup_module_object()

See also ``test/units/module_utils/aws/test_rds.py``

Note that the argument_spec dictionary is visible in a module variable. This has
advantages, both in allowing explicit testing of the arguments and in allowing the easy
creation of module objects for testing.

The same restructuring technique can be valuable for testing other functionality, such as the part of the module which queries the object that the module configures.

Traps for maintaining Python 2 compatibility
============================================

If you use the ``mock`` library from the Python 2.6 standard library, a number of the
assert functions are missing but will return as if successful. This means that test cases
should take great care *not* to use functions marked as *new* in the Python 3 documentation,
since the tests will likely always succeed even if the code is broken when run on older
versions of Python.
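
The kind of silent failure to watch for looks something like this (a sketch;
``update_config`` is a hypothetical function under test)::

    from ansible.compat.tests.mock import MagicMock

    def test_config_not_rewritten(self):
        module = MagicMock()
        update_config(module, force=False)
        # assert_not_called() only exists in newer versions of mock. On an old mock
        # release this attribute access just creates another mock and silently
        # "passes", even if write_config() was in fact called.
        module.write_config.assert_not_called()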

A helpful development approach to this should be to ensure that all of the tests have been
run under Python 2.6 and that each assertion in the test cases has been checked to work by breaking
the code in Ansible to trigger that failure.

.. seealso::

   :doc:`testing_units`
       Ansible unit tests documentation
   :doc:`testing_running_locally`
       Running tests locally including gathering and reporting coverage data
   :doc:`developing_modules`
       How to develop modules
   `Python 3 documentation - 26.4. unittest — Unit testing framework <https://docs.python.org/3/library/unittest.html>`_
       The documentation of the unittest framework in Python 3
   `Python 2 documentation - 25.3. unittest — Unit testing framework <https://docs.python.org/2/library/unittest.html>`_
       The documentation of the earliest supported unittest framework - from Python 2.6
   `pytest: helps you write better programs <https://docs.pytest.org/en/latest/>`_
       The documentation of pytest - the framework actually used to run Ansible unit tests
   `Development Mailing List <http://groups.google.com/group/ansible-devel>`_
       Mailing list for development topics
   `Testing Your Code (from The Hitchhiker's Guide to Python!) <http://docs.python-guide.org/en/latest/writing/tests/>`_
       General advice on testing Python code
   `Uncle Bob's many videos on YouTube <https://www.youtube.com/watch?v=QedpQjxBPMA&list=PLlu0CT-JnSasQzGrGzddSczJQQU7295D2>`_
       Unit testing is a part of various philosophies of software development, including
       Extreme Programming (XP) and Clean Coding. Uncle Bob talks through how to benefit from this.
   `"Why Most Unit Testing is Waste" <http://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf>`_
       An article warning against the costs of unit testing
   `A Response to "Why Most Unit Testing is Waste" <https://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-is-waste/>`_
       A response pointing to how to maintain the value of unit tests