qubes.tests – Writing tests for qubes

Writing tests is very important for ensuring the quality of delivered code. A given test case may check for a variety of conditions, but tests generally fall into two categories of conformance tests:

  • Unit tests: these test the smallest units of code, typically methods or functions, or even one specific method called with a particular combination of arguments.

  • Integration tests: these test the interaction of units.

We are interested in both categories.

There is also a distinct category of regression tests (both unit- and integration-level), which are included because they check for specific bugs that were fixed in the past and should not reappear. Those should be accompanied by a reference to the closed ticket that describes the bug.

Qubes’ tests are written using the unittest module from the Python Standard Library, for both unit tests and integration tests.

Test case organisation

Every module (like qubes.vm.qubesvm) should have a companion (like qubes.tests.vm.qubesvm). A package’s __init__.py file should be accompanied by an __init__.py inside the respective directory under tests/. Inside the tests module there should be one qubes.tests.QubesTestCase class for each class in the main module, plus one class for functions and global variables. qubes.tests.QubesTestCase classes should be named TC_xx_ClassName, where xx is a two-digit number. Test functions should be named test_xxx_test_name, where xxx is a three-digit number. You may introduce some structure of your choice into this number.
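Under these conventions, a companion test module for a hypothetical class might be sketched like this (unittest.TestCase stands in for qubes.tests.QubesTestCase to keep the sketch self-contained; the class and method names are illustrative, not real Qubes tests):

```python
import unittest

# hypothetical companion module, e.g. tests/vm/qubesvm.py for qubes.vm.qubesvm

class TC_00_QubesVM(unittest.TestCase):
    # one TC_xx_ClassName class per class in the module under test
    def test_000_init(self):
        # smallest unit first: construction
        pass

    def test_010_name_validation(self):
        # gaps in the numbering leave room for tests added later
        pass

class TC_90_module_functions(unittest.TestCase):
    # plus one class for module-level functions and global variables
    def test_000_some_helper(self):
        pass
```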

Integration tests for Qubes core features are stored in the tests/integ/ directory. Additional tests may be loaded from other packages (see the extra test loader below). Tests in this category run only on a real Qubes system and are not suitable for running in a VM or in Travis. Test classes of this category inherit from qubes.tests.SystemTestCase.

Writing tests

First of all, testing is an art, not a science. Testing is not a panacea and won’t solve all of your problems. The rules given in this guide and elsewhere should be followed, but shouldn’t be worshipped.

A test can be divided into three phases. The first is the setup phase, in which you arrange for the test condition to occur: you intentionally put the system under test into some specific state. Phase two is exercising the test condition, for example checking some variable for equality or expecting that some exception is raised. Phase three is responsible for returning a verdict; this is largely done by the framework.

When writing tests, you should think about the order of execution. This is the reason for the numbers in the names of classes and test methods. Tests should be written bottom-to-top: test setups that are run later may depend on features that are tested earlier, but not the other way around. This is important because, when encountering a failure, we expect the cause to lie before the failure occurred, not after it. Therefore, when encountering multiple errors, we can immediately focus on fixing the first one instead of wondering whether any of the later problems are relevant. Some people also like to enable the unittest.TestResult.failfast feature, which stops on the first failed test; with the wrong order this messes up their workflow.

A test should fail for one reason only and test one specific issue. This does not mean that you are limited to one .assert* method per test_ function: for example, when testing one regular expression, you are welcome to test many valid and/or invalid inputs, especially when test setup is complicated. However, if you encounter problems during the setup phase, you should skip the test, not fail it. This also aids interpretation of results.
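A minimal sketch of skipping rather than failing on setup problems, using plain unittest (the fixture helper below is hypothetical, and unittest.TestCase stands in for qubes.tests.QubesTestCase):

```python
import unittest

def make_complicated_fixture():
    # hypothetical stand-in for an elaborate setup step that may fail
    raise OSError('required resource not available in this environment')

class TC_00_Regexp(unittest.TestCase):
    def setUp(self):
        super().setUp()
        try:
            self.fixture = make_complicated_fixture()
        except OSError as err:
            # a setup problem is not a failure of the code under test
            self.skipTest('fixture unavailable: {}'.format(err))

    def test_000_one_issue_many_inputs(self):
        # one specific issue under test, exercised with many inputs
        for sample in ('a1', 'b2', 'c3'):
            self.assertRegex(sample, r'^[a-z][0-9]$')
```

Running this test yields a SKIP verdict, not a FAIL, so the result points at the missing fixture instead of a bogus bug in the code under test.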

You may, when it makes sense, manipulate private members of the classes under test. This violates one of the founding principles of object-oriented programming, but may be required to write tests in the correct order if your class provides public methods with circular dependencies. For example, a container may check whether an added item is already in the container, but you can’t test the __contains__ method without something already inside. Don’t forget to test the other method later.

When developing tests, it may be useful to pause a test on failure and inspect the running VMs manually. To do that, set the QUBES_TEST_WAIT_ON_FAIL=1 environment variable. This will wait for a keypress before cleaning up after a failed test. It’s recommended to use this feature together with unittest.TestResult.failfast (the -f option to the unittest runner).

Special Qubes-specific considerations

Events

qubes.tests.QubesTestCase provides convenient methods for checking whether an event was fired or not: qubes.tests.QubesTestCase.assertEventFired() and qubes.tests.QubesTestCase.assertEventNotFired(). These require that the emitter is a subclass of qubes.tests.TestEmitter. You may instantiate it directly:

import qubes.tests

class TC_10_SomeClass(qubes.tests.QubesTestCase):
    def test_000_event(self):
        emitter = qubes.tests.TestEmitter()
        emitter.fire_event('did-fire')
        self.assertEventFired(emitter, 'did-fire')

If you need to snoop on a specific class (which is already a child of qubes.events.Emitter, possibly indirectly), you can define a derived class which uses qubes.tests.TestEmitter as a mix-in:

import qubes
import qubes.tests

class TestHolder(qubes.tests.TestEmitter, qubes.PropertyHolder):
    pass

class TC_20_PropertyHolder(qubes.tests.QubesTestCase):
    def test_000_event(self):
        emitter = TestHolder()
        self.assertEventNotFired(emitter, 'did-not-fire')

Dom0

Qubes is a complex piece of software and depends on a number of other complex pieces, notably a VM hypervisor or some other isolation provider. Not everything may be testable under all conditions. Some tests (mainly unit tests) are expected to run during compilation, but many tests (probably all of the integration tests and more) can run only inside an already deployed Qubes installation. There is a special decorator, qubes.tests.skipUnlessDom0(), which causes a test (or even an entire class) to be skipped outside dom0. Use it freely:

import qubes.tests

class TC_30_SomeClass(qubes.tests.QubesTestCase):
    @qubes.tests.skipUnlessDom0
    def test_000_inside_dom0(self):
        # this is skipped outside dom0
        pass

@qubes.tests.skipUnlessDom0
class TC_31_SomeOtherClass(qubes.tests.QubesTestCase):
    # all tests in this class are skipped
    pass

VM tests

Some integration tests verify not only the dom0 part of the system, but also the VM part. In those cases, it makes sense to repeat them for different templates. Additionally, the list of templates can be dynamic (different templates installed, only some considered for testing, etc.). This can be achieved by creating a mixin class with the actual tests (a class inheriting just from object, instead of qubes.tests.SystemTestCase or unittest.TestCase) and then creating the actual test classes dynamically using qubes.tests.create_testcases_for_templates(). Test classes created this way will have template set to the name of the template under test, and this template will also be set as the default template during the test execution. The function takes a test class name prefix (the template name will be appended to it after a ‘_’ separator), the classes to inherit from (in most cases just the created mixin and qubes.tests.SystemTestCase) and the current module object (use sys.modules[__name__]). The function returns the created test classes and also adds them to the appropriate module (pointed to by the module parameter). This should be done in two cases:

  • in the load_tests() function - when the test loader requests the list of tests

  • at module import time, using the wrapper qubes.tests.maybe_create_testcases_on_import() (it will call the function only if an explicit list of templates is given, to avoid loading qubes.xml when merely importing the module)

An example boilerplate looks like this:

import sys

import qubes.tests

def create_testcases_for_templates():
    return qubes.tests.create_testcases_for_templates('TC_00_AppVM',
        TC_00_AppVMMixin, qubes.tests.SystemTestCase,
        module=sys.modules[__name__])

def load_tests(loader, tests, pattern):
    tests.addTests(loader.loadTestsFromNames(
        create_testcases_for_templates()))
    return tests

qubes.tests.maybe_create_testcases_on_import(create_testcases_for_templates)

This will by default create tests for all the templates installed in the system. Additionally, it is possible to control this process using environment variables:

  • QUBES_TEST_TEMPLATES - space-separated list of templates to test

  • QUBES_TEST_LOAD_ALL - create tests for all the templates (by inspecting the qubes.xml file), even at module import time

This dynamic test creation is intentionally made compatible with the Nose2 test runner and its load_tests protocol implementation.
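The mechanics behind this can be illustrated with a self-contained sketch: classes are built with type() and injected into the module's globals, which is roughly what qubes.tests.create_testcases_for_templates() does (the simplified helper and the template names below are illustrative, not the real implementation):

```python
import sys
import unittest

class TC_00_AppVMMixin:
    # inherits only from object, so the loader does not pick it up directly
    def test_000_template_is_set(self):
        self.assertTrue(self.template)

def make_template_testcases(prefix, *bases, module, templates):
    # simplified stand-in for qubes.tests.create_testcases_for_templates()
    created = []
    for template in templates:
        name = '{}_{}'.format(prefix, template)
        cls = type(name, bases, {'template': template})
        setattr(module, name, cls)  # register in the module's globals
        created.append(cls)
    return created

classes = make_template_testcases(
    'TC_00_AppVM', TC_00_AppVMMixin, unittest.TestCase,
    module=sys.modules[__name__],
    templates=('fedora-40', 'debian-12'))  # hypothetical template list
```

Note that a name like TC_00_AppVM_fedora-40 is not a valid Python identifier, which is why such classes are reachable only via getattr(), as described below.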

Extra tests

Most tests live in this package, but it is also possible to store tests in other packages while still using the infrastructure provided here, and to include them in the common test run. Loading extra tests is implemented in qubes.tests.extra. To write tests to be loaded this way, create the test class(es) as usual. You can also use the helper class qubes.tests.extra.ExtraTestCase (instead of qubes.tests.SystemTestCase), which provides a few convenient functions and hides the use of asyncio for simple cases (like vm.start(), vm.run()).

The next step is to register the test class(es). You do this by defining an entry point for your package. There are two groups:

  • qubes.tests.extra - for general tests (called once)

  • qubes.tests.extra.for_template - for per-VM tests (called for each template under test)

As the name in the group, choose something unique, preferably the package name. The object reference should point at a function that returns a list of test classes.

Example setup.py:

from setuptools import setup

setup(
    name='splitgpg',
    version='1.0',
    packages=['splitgpg'],
    entry_points={
        'qubes.tests.extra.for_template':
            'splitgpg = splitgpg.tests:list_tests',
    }
)
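The splitgpg.tests:list_tests function referenced by that entry point could be as simple as the following hypothetical sketch (unittest.TestCase stands in for qubes.tests.extra.ExtraTestCase here, so the snippet is self-contained; the class and test names are illustrative):

```python
import unittest

class TC_00_SplitGPG(unittest.TestCase):
    # in the real package this would inherit qubes.tests.extra.ExtraTestCase
    def test_000_placeholder(self):
        pass

def list_tests():
    # called by the extra test loader; returns an iterable of test classes
    return [TC_00_SplitGPG]
```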

The test loading process can be additionally controlled with environment variables:

  • QUBES_TEST_EXTRA_INCLUDE - space-separated list of tests to include (named by the name in the entry point, splitgpg in the above example); if defined, only those extra tests will be loaded

  • QUBES_TEST_EXTRA_EXCLUDE - space-separated list of tests to exclude

Module contents

Warning

The test suite hereby claims any domain whose name starts with VMPREFIX as fair game. This is needed to enforce a sane test execution environment. If you have domains named test-*, don’t run the tests.

class qubes.tests.QubesTestCase(methodName='runTest')[source]

Bases: TestCase

Base class for Qubes unit tests.

assertEventFired(subject, event, kwargs=None)[source]

Check whether the event was fired on the given emitter, and fail if it was not.

Parameters:
  • subject – emitter which is being checked

  • event (str) – event identifier

  • kwargs (dict) – when given, all items must appear in kwargs passed to an event

assertEventNotFired(subject, event, kwargs=None)[source]

Check whether the event was fired on the given emitter, and fail if it was.

Parameters:
  • subject – emitter which is being checked

  • event (str) – event identifier

  • kwargs (dict) – when given, all items must appear in kwargs passed to an event

assertNotRaises(excClass, callableObj=None, *args, **kwargs)[source]

Fail if an exception of class excClass is raised by callableObj when invoked with arguments args and keyword arguments kwargs. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.

If called with callableObj omitted or None, will return a context object to be used like this:

with self.assertNotRaises(SomeException):
    do_something()

assertXMLEqual(xml1, xml2, msg='')[source]

Check for equality of two XML objects.

Parameters:
  • xml1 (lxml.etree._Element) – first element

  • xml2 (lxml.etree._Element) – second element

assertXMLIsValid(xml, file=None, schema=None)[source]

Check whether the given XML fulfills the Relax NG schema.

Schema can be given in a couple of ways:

  • As a separate file. This is most common, and also the only way to handle file inclusion. Call with the filename as the second argument.

  • As string containing actual schema. Put that string in schema keyword argument.

Parameters:
  • xml (lxml.etree._Element) – XML element instance to check

  • file (str) – filename of Relax NG schema

  • schema (str) – optional explicit schema string

cleanup_loop()[source]

Check if the loop is empty

setUp()[source]

Hook method for setting up the test fixture before exercising it.

success()[source]

Check if test was successful during tearDown

class qubes.tests.SystemTestCase(methodName='runTest')[source]

Bases: QubesTestCase

Mixin for integration tests. All tests here should use the self.app object and, when the qubes.xml path is needed, should use XMLPATH defined in this file. Every VM created by a test must use SystemTestCase.make_vm_name() for its name. By default, self.app represents a copy of the host collection. Preexisting domains should not be modified in any way (including being started or stopped). If a test needs to make some modification, it must clone the VM first.

If some group of tests needs class-wide initialization, the author should first consider whether it is really needed. If so, setUpClass can be used to create a Qubes(CLASS_XMLPATH) object and create/import the required stuff there. VMs created in TestCase.setUpClass() should use self.make_vm_name(’…’, class_teardown=True) for name creation. Such a group of tests needs to take care of the TestCase.tearDownClass() implementation itself.
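The division of responsibility can be sketched with plain unittest: the shared fixture is built once in setUpClass and, because the framework does not clean class-level state, torn down explicitly in tearDownClass. In a real Qubes test the fixture would be a Qubes(CLASS_XMLPATH) object with VMs named via make_vm_name(); the dictionary below is only a stand-in:

```python
import unittest

class TC_00_SharedFixture(unittest.TestCase):
    # class-wide initialization: built once for the whole group of tests
    @classmethod
    def setUpClass(cls):
        cls.shared = {'ready': True}  # stands in for an expensive fixture

    # the group of tests must implement its own teardown
    @classmethod
    def tearDownClass(cls):
        cls.shared = None

    def test_000_uses_fixture(self):
        self.assertTrue(self.shared['ready'])
```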

create_bootable_iso()[source]

Create a simple bootable ISO image. Type ‘poweroff’ into it to terminate that VM.

enter_keys_in_window(title, keys)[source]

Search for a window with the given title, then enter the listed keys into it. The function will wait for said window to appear.

Parameters:
  • title – title of window

  • keys – list of keys to enter, as for xdotool key

Returns:

None

qrexec_policy(service, source, destination, allow=True, action=None)[source]

Allow qrexec calls for the duration of the test.

Parameters:
  • service – service name

  • source – source VM name

  • destination – destination VM name

  • allow – add a rule with the ‘allow’ action, otherwise ‘deny’

  • action – custom action; if specified, the allow argument is ignored

remove_test_vms(xmlpath='/var/lib/qubes/qubes-test.xml', prefix='test-inst-')[source]

Aggressively remove any domain that has a name in the testing namespace.

Parameters:

prefix – name prefix of VMs to remove, can be a list of prefixes

setUp()[source]

Hook method for setting up the test fixture before exercising it.

async start_vm(vm)[source]

Start a VM and wait for it to be fully up

wait_for_window(*args, **kwargs)[source]

Wait for a window with a given title. Depending on show parameter, it will wait for either window to show or to disappear.

Parameters:
  • title – title of the window to wait for

  • timeout – timeout of the operation, in seconds

  • show – if True - wait for the window to be visible, otherwise - to not be visible

  • search_class – search based on window class instead of title

  • include_tray – include windows docked in tray

Returns:

window id of found window, if show=True

async wait_for_window_coro(title, search_class=False, include_tray=True, timeout=30, show=True)[source]

Wait for a window with a given title. Depending on show parameter, it will wait for either window to show or to disappear.

Parameters:
  • title – title of the window to wait for

  • timeout – timeout of the operation, in seconds

  • show – if True - wait for the window to be visible, otherwise - to not be visible

  • search_class – search based on window class instead of title

  • include_tray – include windows docked in tray

Returns:

window id of found window, if show=True

async wait_for_window_hide_coro(title, winid, timeout=30)[source]

Wait for a window to disappear.

Parameters:
  • title – title of the window

  • winid – window id

  • timeout – timeout of the operation, in seconds

async whonix_gw_setup_async(vm)[source]

Complete the whonix-gw initial setup, to enable networking there. The VM should not be running yet, but will be started in the process.

class qubes.tests.TestEmitter(*args, **kwargs)[source]

Bases: Emitter

Dummy event emitter which records events fired on it.

Events are counted in the fired_events attribute, which is a collections.Counter instance. For each event, an (event, args, kwargs) tuple is counted. event is the event name (a string), args is a tuple of positional arguments and kwargs is a sorted tuple of items from keyword arguments.

>>> emitter = TestEmitter()
>>> emitter.fired_events
Counter()
>>> emitter.fire_event('event', spam='eggs', foo='bar')
>>> emitter.fired_events
Counter({('event', (), (('foo', 'bar'), ('spam', 'eggs'))): 1})

fire_event(event, **kwargs)[source]

Call all handlers for an event.

Handlers are called for the class and all parent classes, in reversed or true (depending on the pre_event parameter) method resolution order. For each class, bound handlers (specified in the class definition) are called first, then handlers from extensions. Aside from the above, the remaining order is undefined.

This method calls only synchronous handlers. If any asynchronous handler is registered for the event, RuntimeError is raised.

Parameters:
  • event (str) – event identifier

  • pre_event – is this -pre- event? reverse handlers calling order

Returns:

list of effects

All kwargs are passed verbatim. They are different for different events.

async fire_event_async(event, pre_event=False, **kwargs)[source]

Call all handlers for an event, allowing async calls.

Handlers are called for the class and all parent classes, in reversed or true (depending on the pre_event parameter) method resolution order. For each class, bound handlers (specified in the class definition) are called first, then handlers from extensions. Aside from the above, the remaining order is undefined.

This method calls both synchronous and asynchronous handlers. The order of asynchronous calls is, by definition, undefined.

See also

fire_event()

Parameters:
  • event (str) – event identifier

  • pre_event – is this -pre- event? reverse handlers calling order

Returns:

list of effects

All kwargs are passed verbatim. They are different for different events.

fired_events

collections.Counter instance

class qubes.tests.substitute_entry_points(group, tempgroup)[source]

Bases: object

Monkey-patch pkg_resources to substitute one group in iter_entry_points

This is for testing plugins, like device classes.

Parameters:
  • group (str) – The group that is to be overloaded.

  • tempgroup (str) – The substitute group.

Inside this context, if one iterates over entry points in the overloaded group, the iteration actually happens over the other group.

This context manager is stackable. To substitute more than one entry point group, just nest the contexts.

qubes.tests.create_testcases_for_templates(name, *bases, module, **kwds)[source]

Do-it-all helper for generating per-template tests via the load_tests protocol

This does several things:
  • creates per-template classes

  • adds them to module’s globals()

  • returns an iterable suitable for passing to loader.loadTestsFromNames

TestCase classes created by this function have an implicit .template attribute, which contains the name of the respective template. They are also named with the given prefix, an underscore and the template name. If the template name contains characters that are not valid as part of a Python identifier, they are impossible to get via the standard . operator, though getattr() is still usable.

>>> class MyTestsMixIn:
...     def test_000_my_test(self):
...         assert self.template.startswith('debian')
>>> def load_tests(loader, tests, pattern):
...     tests.addTests(loader.loadTestsFromNames(
...         qubes.tests.create_testcases_for_templates(
...             'TC_00_MyTests', MyTestsMixIn, qubes.tests.SystemTestCase,
...             module=sys.modules[__name__])))

NOTE adding module=sys.modules[__name__] is mandatory; to allow enforcing this, the function uses keyword-only argument syntax, which is available only in Python 3.

qubes.tests.expectedFailureIfTemplate(templates)[source]

Decorator for marking specific test as expected to fail only for some templates. Template name is compared as substring, so ‘whonix’ will handle both ‘whonix-ws’ and ‘whonix-gw’. templates can be either a single string, or an iterable

qubes.tests.extra_info(obj)[source]

Return short info identifying object.

For example, if obj is a qube, return its name. This is for use with objgraph package.

qubes.tests.list_templates()[source]

Returns tuple of template names available in the system.

qubes.tests.maybe_create_testcases_on_import(create_testcases_gen)[source]

If certain conditions are met, call create_testcases_gen to create test cases for template tests. The purpose is to use it at import time of integration test module(s), so the test runner can discover the tests without using the load_tests protocol.

The conditions - any of:
  • QUBES_TEST_TEMPLATES present in the environment (it’s possible to create test cases without opening qubes.xml)

  • QUBES_TEST_LOAD_ALL present in the environment

qubes.tests.skipUnlessDom0(test_item)[source]

Decorator that skips test outside dom0.

Some tests (especially integration tests) have to be run in a more or less working dom0. This is checked by connecting to libvirt.

qubes.tests.skipUnlessEnv(varname)[source]

Decorator generator for skipping tests when an environment variable is not set.

Some tests require a working X11 display, like those using the GTK library, which segfaults without a connection to X. Others require their own custom variables.
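The behaviour amounts to wrapping unittest.skipUnless around an environment check; a self-contained sketch (the helper is a simplified stand-in for qubes.tests.skipUnlessEnv, and the variable name is illustrative):

```python
import os
import unittest

def skipUnlessEnv(varname):
    # same idea as qubes.tests.skipUnlessEnv(), expressed with the
    # standard library: skip unless the variable is set and non-empty
    return unittest.skipUnless(
        os.getenv(varname), 'no {} environment variable'.format(varname))

class TC_00_GUI(unittest.TestCase):
    @skipUnlessEnv('QUBES_TEST_FAKE_DISPLAY')  # hypothetical variable
    def test_000_needs_display(self):
        pass
```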

qubes.tests.skipUnlessGit(test_item)[source]

Decorator that skips test outside git repo.

There are very few tests that can be run only in git. One example is checking the correctness of example code that won’t get included in the RPM.

qubes.tests.wait_on_fail(func)[source]

Test decorator for debugging. It pauses test execution on failure and waits for user input. It’s useful for manually inspecting system state just after a test fails, before any cleanup is executed.

Usage: decorate a test you are debugging. DO IT ONLY TEMPORARILY, DO NOT COMMIT!

qubes.tests.in_dom0 = False

True if running in dom0, False otherwise

qubes.tests.in_git = '/home/docs/checkouts/readthedocs.org/user_builds/qubes-core-admin/checkouts/latest'

False if outside of git repo, path to root of the directory otherwise