Re: [PATCH 3/5] KVM test: add wrapper for RHEL-6 style unittests

On 06/25/2010 02:33 AM, Lucas Meneghel Rodrigues wrote:
> From: Michael Goldish <mgoldish@xxxxxxxxxx>
> 
> Based on Naphtali Sprei's patches.
> 
> Changes from v2:
> - Determine dynamically what tests are available and
> what are the options for them, based on the unittest
> directory (that is supposed to be linked during the
> build test)
> 
> Changes from v1:
> - Determine success/failure by exit status instead of output
> - Restructure loop so that vm.is_dead() is called less often
> - Copy test log to debugdir/unittest.log
> - Change parameters passed to wait_for()
> 
> Signed-off-by: Michael Goldish <mgoldish@xxxxxxxxxx>
> Signed-off-by: Lucas Meneghel Rodrigues <lmr@xxxxxxxxxx>
> ---
>  client/tests/kvm/tests/unittest.py |  114 ++++++++++++++++++++++++++++++++++++
>  1 files changed, 114 insertions(+), 0 deletions(-)
>  create mode 100644 client/tests/kvm/tests/unittest.py
> 
> diff --git a/client/tests/kvm/tests/unittest.py b/client/tests/kvm/tests/unittest.py
> new file mode 100644
> index 0000000..84e778c
> --- /dev/null
> +++ b/client/tests/kvm/tests/unittest.py
> @@ -0,0 +1,114 @@
> +import logging, time, os, shutil, glob, ConfigParser
> +from autotest_lib.client.common_lib import error
> +import kvm_subprocess, kvm_test_utils, kvm_utils, kvm_preprocessing
> +
> +
> +def run_unittest(test, params, env):
> +    """
> +    KVM RHEL-6 style unit test:
> +    1) Resume a stopped VM
> +    2) Wait for VM to terminate
> +    3) If qemu exited with code = 0, the unittest passed. Otherwise, it failed
> +    4) Collect all logs generated
> +
> +    @param test: kvm test object
> +    @param params: Dictionary with the test parameters
> +    @param env: Dictionary with test environment
> +    """
> +    unittest_dir = os.path.join(test.bindir, 'unittests')
> +    if not os.path.isdir(unittest_dir):
> +        raise error.TestError("No unittest dir %s available (did you run the "
> +                              "build test first?)" % unittest_dir)
> +    os.chdir(unittest_dir)

Why chdir()?
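
glob() can take a full path, so the chdir() seems avoidable, e.g.
(untested sketch):

    unittest_list = [os.path.basename(f) for f in
                     glob.glob(os.path.join(unittest_dir, '*.flat'))]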

> +    unittest_list = glob.glob('*.flat')
> +    if not unittest_list:
> +        raise error.TestError("No unittest files available (did you run the "
> +                              "build test first?)")
> +    logging.debug('Flat file list: %s', unittest_list)
> +
> +    unittest_cfg = os.path.join(unittest_dir, 'unittests.cfg')
> +    parser = ConfigParser.ConfigParser()
> +    parser.read(unittest_cfg)
> +    test_list = parser.sections()
> +
> +    if not test_list:
> +        raise error.TestError("No tests listed on config file %s" %
> +                              unittest_cfg)
> +    logging.debug('Unit test list: %s', test_list)
> +
> +    if params.get('test_list', None):
> +        test_list = eval(params.get('test_list'))

I wonder why you use eval() here.  It's somewhat inconsistent with how
we usually define lists in config files.
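
split() would match that convention, e.g. (untested sketch, assuming
test_list is given as a whitespace-separated string in the config):

    if params.get('test_list'):
        test_list = params.get('test_list').split()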

> +        logging.info('Original test list overridden by user')
> +        logging.info('User defined unit test list: %s', test_list)
> +
> +    nfail = 0
> +    tests_failed = []
> +
> +    timeout = int(params.get('unittest_timeout', 600))
> +
> +    for t in test_list:
> +        file = None
> +        if parser.has_option(t, 'file'):
> +            file = parser.get(t, 'file')
> +
> +        if file is None:
> +            raise error.TestError('Unittest config file %s has section %s but '
> +                                  'lacks the mandatory option file.' %
> +                                  (unittest_cfg, t))
> +
> +        if file not in unittest_list:
> +            raise error.TestError('Unittest file %s referenced in config file '
> +                                  '%s but was not found under the unittest dir' %
> +                                  (file, unittest_cfg))
> +
> +        smp = None
> +        if parser.has_option(t, 'smp'):
> +            smp = int(parser.get(t, 'smp'))
> +
> +        extra_params = None
> +        if parser.has_option(t, 'extra_params'):
> +            extra_params = parser.get(t, 'extra_params')
> +
> +        vm_name = params.get("main_vm")
> +        testlog_path = os.path.join(test.debugdir, "%s.log" % t)
> +
> +        params['kernel'] = os.path.join(unittest_dir, file)
> +        logging.info('Running %s', t)
> +
> +        if smp is not None:
> +            params['smp'] = smp
> +            logging.info('SMP: %s', smp)
> +
> +        if extra_params is not None:
> +            params['extra_params'] = extra_params
> +            logging.info('Extra params: %s', extra_params)

1. Why not just use

params.update(parser.items(t))

instead of hardcoding 'smp', 'extra_params' etc.  It seems more flexible
but maybe I'm missing something.

2. Why not keep only 'extra_params' and dump the other keys?  The cfg
file is version controlled in qemu-kvm, so it makes sense to give kvm
developers direct control over the command line.  The parameter can be
named 'args' or 'cmdline' or something like that.  If we do that, we
also have to make sure that nothing unnecessary is added to the command
line by default in VM.create() (e.g. -smp should be added only if the
user sets 'smp' to some value).  The required stuff will be added as
usual (qemu binary, monitor, serial console, testdev).  What do you
think?  If any kvm developer is reading this, please let us know if you
prefer to control the command line directly or via kvm-autotest parameters.
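
For (2), the loop body could shrink to something like (untested
sketch; 'args' is just a hypothetical name for the option):

    # Give the cfg file direct control over the command line;
    # only 'file' remains mandatory.
    params['kernel'] = os.path.join(unittest_dir, parser.get(t, 'file'))
    if parser.has_option(t, 'args'):
        params['extra_params'] = parser.get(t, 'args')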

> +        try:
> +            try:
> +                kvm_preprocessing.preprocess_vm(test, params, env, vm_name)
> +                vm = kvm_utils.env_get_vm(env, vm_name)

I'm not sure preprocess_vm() is better than just vm.create().
preprocess_vm() does unnecessary stuff in this context.
Also, this would be the 2nd time preprocess_vm() is called, so in fact
what you get is a VM that is started by the first preprocess_vm() and
then restarted by the 2nd preprocess_vm().
To avoid that we can set 'start_vm = no' in unittests.cfg.sample (the
kvm-autotest one).
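
i.e. something like (untested; assumes VM.create() accepts a params
dict, as in kvm_vm.py, and returns False on failure):

    # With start_vm = no, the first preprocess_vm() only registers
    # the VM object; create() here starts qemu with the per-test
    # params.
    vm = kvm_utils.env_get_vm(env, vm_name)
    if not vm.create(params=params):
        raise error.TestError("Could not start VM for unittest %s" % t)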

> +                vm.monitor.cmd("cont")
> +                logging.info("Waiting for unittest %s to complete, timeout %s, "
> +                             "output in %s", t, timeout,
> +                             vm.get_testlog_filename())
> +                if not kvm_utils.wait_for(vm.is_dead, timeout):
> +                    raise error.TestFail("Timeout elapsed (%ss)" % timeout)
> +                # Check qemu's exit status
> +                status = vm.process.get_status()
> +                if status != 0:
> +                    nfail += 1
> +                    tests_failed.append(t)
> +                    logging.error("Unit test %s failed", t)
> +            except Exception, e:
> +                nfail += 1
> +                logging.error('Exception happened during %s: %s', t, str(e))
> +        finally:
> +            try:
> +                shutil.copy(vm.get_testlog_filename(), testlog_path)

Maybe we should use os.link() before starting the test.  That way if the
host dies during the test the result dir will already have a hard link
to the testlog.
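
e.g. (untested sketch; os.link() fails across filesystems, in which
case we would fall back to the copy):

    # Hard-link the guest test log into the result dir before
    # resuming the VM, so the log survives even if the host dies
    # mid-test.
    try:
        os.link(vm.get_testlog_filename(), testlog_path)
    except OSError:
        shutil.copy(vm.get_testlog_filename(), testlog_path)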

> +                logging.info("Unit test log collected and available under %s",
> +                             testlog_path)
> +            except (NameError, IOError):
> +                logging.error("Could not collect logs")
> +
> +    if nfail != 0:
> +        raise error.TestFail("Unit tests failed: %s" % " ".join(tests_failed))
