Re: [PATCH 3/3] Introduce QMP basic test-suite

On Mon, 2010-10-18 at 12:55 -0200, Luiz Capitulino wrote:
> On Mon, 18 Oct 2010 09:42:47 -0200
> Lucas Meneghel Rodrigues <lmr@xxxxxxxxxx> wrote:
> 
> > On Fri, 2010-10-15 at 17:52 -0300, Luiz Capitulino wrote:
> > > This commit introduces a suite which checks that QMP conforms to its
> > > specification (which is file QMP/qmp-spec.txt in QEMU's source tree).
> > > 
> > > It's important to note that this suite does _not_ do command or
> > > asynchronous messages testing, as each command or asynchronous message
> > > has its own specification, and thus should be tested separately.
> > > 
> > > This suite is limited to test that the basic protocol works as specified,
> > > that is, to ensure that the greeting message, success and error responses
> > > contain the basic information the spec says they do. Additionally,
> > > several error conditions at the protocol level are also tested.
> > 
> > This is a very good test quality-wise, thanks for the effort writing it!
> 
> Thanks.
> 
> > > Signed-off-by: Luiz Capitulino <lcapitulino@xxxxxxxxxx>
> > > ---
> > >  client/tests/kvm/tests/qmp_basic.py    |  374 ++++++++++++++++++++++++++++++++
> > >  client/tests/kvm/tests_base.cfg.sample |    3 +
> > >  2 files changed, 377 insertions(+), 0 deletions(-)
> > >  create mode 100644 client/tests/kvm/tests/qmp_basic.py
> > > 
> > > diff --git a/client/tests/kvm/tests/qmp_basic.py b/client/tests/kvm/tests/qmp_basic.py
> > > new file mode 100644
> > > index 0000000..21198ef
> > > --- /dev/null
> > > +++ b/client/tests/kvm/tests/qmp_basic.py
> > > @@ -0,0 +1,374 @@
> > > +import kvm_test_utils
> > > +from autotest_lib.client.common_lib import error
> > > +
> > > +def run_qmp_basic(test, params, env):
> > > +    """
> > > +    QMP Specification test-suite: this checks if the *basic* protocol conforms
> > > +    to its specification, which is file QMP/qmp-spec.txt in QEMU's source tree.
> > > +
> > > +    IMPORTANT NOTES:
> > > +
> > > +        o Most tests depend heavily on QMP's error information (eg. classes),
> > > +          this might have bad implications as the error interface is going to
> > > +          change in QMP
> > 
> > ^ I was thinking about this: perhaps in the future, when the error
> > interface does change, this test can be extended to check the
> > capabilities of the monitor and act differently according to the results
> > of that check.
> 
> Yeah, we'll need something like this.
> 
> > > +        o Command testing is *not* covered in this suite. Each command has its
> > > +          own specification and should be tested separately
> > > +
> > > +        o We use the same terminology as used by the QMP specification,
> > > +          specially with regard to JSON types (eg. a Python dict is called
> > > +          a json-object)
> > > +
> > > +        o This is divided in sub test-suites, please check the bottom of this
> > > +          file to check the order in which they are run
> > > +
> > > +    TODO:
> > > +
> > > +        o Finding which test failed is not as easy as it should be
> > 
> > ^ What I usually do is keep a failure dictionary/list that contains
> > all tests that have failed so far, and always append new errors to
> > it as they happen. At the end of the test, when raising
> > error.TestError, this contextual info can be added to the exception.
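
Just to make the above concrete, a rough sketch of what I mean -- the
run_sub_test() helper below is made up for illustration, it is not an
existing autotest API; only greeting_suite() and error.TestFail come
from the patch:

    failures = []

    def run_sub_test(name, func, *args):
        # Run one sub test-suite and record a failure instead of aborting
        # the whole test, so at the end we know exactly what failed.
        try:
            func(*args)
        except error.TestFail, e:
            failures.append("%s: %s" % (name, e))

    run_sub_test("greeting", greeting_suite, monitor)
    # ... same call for each of the other sub test-suites ...

    if failures:
        raise error.TestFail("%d sub test-suite(s) failed:\n%s" %
                             (len(failures), "\n".join(failures)))
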
> 
> Looks like a good idea, but I think this should be part of the autotest API.
> 
> Even better, I really think that we should introduce the notion of test suites
> and test cases and provide appropriate assert methods.
> 
> A good way to do this would be to use Python's unittest module, which would
> bring a number of important features I miss in kvm-autotest:
> 
>  o test fixture
>  o The notion of test suites and test cases: helps keep things separate and
>    identify which test has failed
>  o Ability to run tests from the command-line
>  o Good assert methods
>  o Sane output
> 
> The problem with kvm-autotest output today is that it's more about debugging
> kvm-autotest than reporting appropriate test results.
> 
> I know that there are scripts to produce a better report, but this looks like a
> workaround to me. Take a look at the output of the C Check framework:
> 
> Success:
> 
> ~/src/qmp-unstable (mon-save-transfers-v0)/ ./check-qint 
> Running suite(s): QInt test-suite
> 100%: Checks: 5, Failures: 0, Errors: 0
> ~/src/qmp-unstable (master)/
> 
> Failure:
> 
> ~/src/qmp-unstable (master *)/ ./check-qint 
> Running suite(s): QInt test-suite
> 80%: Checks: 5, Failures: 1, Errors: 0
> check-qint.c:64:F:Public Interface:qint_get_int_test:0: Assertion 'qint_get_int(qi) == 2' failed
> ~/src/qmp-unstable (master *)/ 
> 
> Python's unittest module output is very similar.
> 
> I don't think we should have exactly the same output, but IMO the current one
> is only useful for knowing that kvm-autotest is not stuck (which is bad from a
> usability perspective).

Ok, fair points. This is something that might need to be addressed at
the autotest level, so it deserves some careful thought. I don't
expect to see it fixed overnight, precisely because it should be done
carefully.
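
To make sure we are picturing the same thing, here is a minimal,
self-contained sketch of how the unittest module structures this. The
sample greeting is hard-coded just so the snippet runs on its own; in
the real test it would of course come from the QMP monitor:

    import unittest

    # Hard-coded sample greeting, for illustration only.
    SAMPLE_GREETING = {"QMP": {"version": {"qemu": {"major": 0, "minor": 13,
                                                    "micro": 0},
                                           "package": ""},
                               "capabilities": []}}

    class GreetingTest(unittest.TestCase):
        def test_qmp_is_dict(self):
            self.assertTrue(isinstance(SAMPLE_GREETING["QMP"], dict))

        def test_version_is_dict(self):
            self.assertTrue(isinstance(SAMPLE_GREETING["QMP"]["version"],
                                       dict))

        def test_capabilities_is_list(self):
            self.assertTrue(isinstance(SAMPLE_GREETING["QMP"]["capabilities"],
                                       list))

    if __name__ == '__main__':
        unittest.main()

Each assertion failure is then reported per test case, which is very
close to the Check output you pasted above.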

> > 
> > > +        o Are all those check_*() functions really needed? Wouldn't a
> > > +          specialized class (eg. a Response class) do better?
> > 
> > ^ It is an interesting idea: we could wrap all the checks in a Response
> > class. Also, it seems to me that we could have a cleaner implementation
> > for the checks by using assertions, e.g. replacing:
> > 
> > if not isinstance(qmp_dict, dict):
> >     raise error.TestFail(...
> > 
> > with:
> > 
> > assert isinstance(qmp_dict, dict), "qmp_dict is not a dict...
> > 
> > And during the actual test, run the tests in a try/except block that
> > traps AssertionErrors and turns them into error.TestFails
> 
> Can we consider this a future improvement?
> 
> The current series is very useful as is and I'd like to get it merged ASAP.

^ Absolutely, that is fine by me.
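
For the record, the assertion idea boils down to something like the
sketch below -- check_greeting() and sample_greeting are made-up names
just for the example, they do not exist in the patch:

    from autotest_lib.client.common_lib import error

    def check_greeting(greeting):
        # Plain asserts replace the explicit raise error.TestFail() calls.
        assert isinstance(greeting, dict), "greeting is not a json-object"
        assert "QMP" in greeting, "'QMP' key missing from greeting"

    # sample_greeting stands in for the greeting read from the QMP monitor.
    sample_greeting = {"QMP": {"version": {}, "capabilities": []}}

    try:
        check_greeting(sample_greeting)
    except AssertionError, e:
        # Convert the assertion failure into the error autotest expects.
        raise error.TestFail(str(e))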

> > > +    """
> > > +    def fail_no_key(qmp_dict, key):
> > > +        if not isinstance(qmp_dict, dict):
> > > +            raise error.TestFail("qmp_dict is not a dict (it's '%s')" % type(qmp_dict))
> > > +        if not key in qmp_dict:
> > > +            raise error.TestFail("'%s' key doesn't exist in dict ('%s')"
> > > +                                 % (key, str(qmp_dict)))
> > > +
> > > +    def check_dict_key(qmp_dict, key, keytype):
> > > +        """
> > > +        Performs the following checks on a QMP dict key:
> > > +
> > > +        1. qmp_dict is a dict
> > > +        2. key exists in qmp_dict
> > > +        3. key is of type keytype
> > > +        
> > > +        If any of these checks fails, error.TestFail is raised.
> > > +        """
> > > +        fail_no_key(qmp_dict, key)
> > > +        if not isinstance(qmp_dict[key], keytype):
> > > +            raise error.TestFail("'%s' key is not of type '%s', it's '%s'"
> > > +                                 % (key, keytype, type(qmp_dict[key])))
> > > +
> > > +    def check_key_is_dict(qmp_dict, key):
> > > +        check_dict_key(qmp_dict, key, dict)
> > > +
> > > +    def check_key_is_list(qmp_dict, key):
> > > +        check_dict_key(qmp_dict, key, list)
> > > +
> > > +    def check_key_is_str(qmp_dict, key):
> > > +        check_dict_key(qmp_dict, key, unicode)
> > > +
> > > +    def check_str_key(qmp_dict, keyname, value=None):
> > > +        check_dict_key(qmp_dict, keyname, unicode)
> > > +        if value and value != qmp_dict[keyname]:
> > > +            raise error.TestFail("'%s' key value '%s' should be '%s'"
> > > +                                 % (keyname, str(qmp_dict[keyname]), str(value)))
> > > +
> > > +    def check_key_is_int(qmp_dict, key):
> > > +        fail_no_key(qmp_dict, key)
> > > +        try:
> > > +            value = int(qmp_dict[key])
> > > +        except:
> > > +            raise error.TestFail("'%s' key is not of type int, it's '%s'"
> > > +                                 % (key, type(qmp_dict[key])))
> > > +
> > > +    def check_bool_key(qmp_dict, keyname, value=None):
> > > +        check_dict_key(qmp_dict, keyname, bool)
> > > +        if value and value != qmp_dict[keyname]:
> > > +            raise error.TestFail("'%s' key value '%s' should be '%s'"
> > > +                                 % (keyname, str(qmp_dict[keyname]), str(value)))
> > > +
> > > +    def check_success_resp(resp, empty=False):
> > > +        """
> > > +        Check QMP OK response.
> > > +
> > > +        @param resp: QMP response
> > > +        @param empty: if True, response should not contain data to return
> > > +        """
> > > +        check_key_is_dict(resp, "return")
> > > +        if empty and len(resp["return"]) > 0:
> > > +            raise error.TestFail("success response is not empty ('%s')"
> > > +                                 % str(resp))
> > 
> > ^ Here's a little detail in this format string operation: we try to follow
> > PEP 8 as much as possible, so the % operator should be on the same line
> > as the initial string, i.e., this should have been written as:
> > 
> > raise error.TestFail("success response is not empty ('%s')" %
> >                      str(resp))
> > 
> > Please consider this for all other similar instances of this construct.
> 
> Will do and submit a v2.
> 
> > > +    def check_error_resp(resp, classname=None, datadict=None):
> > > +        """
> > > +        Check QMP error response.
> > > +
> > > +        @param resp: QMP response
> > > +        @param classname: Expected error class name
> > > +        @param datadict: Expected error data dictionary
> > > +        """
> > > +        check_key_is_dict(resp, "error")
> > > +        check_key_is_str(resp["error"], "class")
> > > +        if classname and resp["error"]["class"] != classname:
> > > +            raise error.TestFail("got error class '%s' expected '%s'"
> > > +                                 % (resp["error"]["class"], classname))
> > > +        check_key_is_dict(resp["error"], "data")
> > > +        if datadict and resp["error"]["data"] != datadict:
> > > +            raise error.TestFail("got data dict '%s' expected '%s'"
> > > +                                 % (resp["error"]["data"], datadict))
> > > +
> > > +    def test_version(version):
> > > +        """
> > > +        Check the QMP greeting message version key which, according to QMP's
> > > +        documentation, should be:
> > > +
> > > +        { "qemu": { "major": json-int, "minor": json-int, "micro": json-int }
> > > +          "package": json-string }
> > > +        """
> > > +        check_key_is_dict(version, "qemu")
> > > +        for key in [ "major", "minor", "micro" ]:
> > > +            check_key_is_int(version["qemu"], key)
> > > +        check_key_is_str(version, "package")
> > 
> > ^ Forgot to put 2 blank lines between the 2 methods
> 
> Will fix.
> 
> > > +    def test_greeting(greeting):
> > > +        check_key_is_dict(greeting, "QMP")
> > > +        check_key_is_dict(greeting["QMP"], "version")
> > > +        check_key_is_list(greeting["QMP"], "capabilities")
> > 
> > ^ Same here
> > 
> > > +    def greeting_suite(monitor):
> > 
> > ^ About the suites: as they all get a monitor as a parameter, it seems
> > better to write them as classes that take the monitor in their
> > constructor, so that all the suite functions become methods of that class.
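
Roughly what I have in mind, as a sketch only -- QMPSuite and
GreetingSuite are illustrative names, not existing code, and
get_greeting() is a stand-in for however the greeting is obtained;
only test_greeting() comes from the patch:

    class QMPSuite(object):
        """Base class for the sub test-suites, holding the shared monitor."""
        def __init__(self, monitor):
            self.monitor = monitor

    class GreetingSuite(QMPSuite):
        def run(self):
            greeting = self.monitor.get_greeting()   # stand-in helper
            test_greeting(greeting)

    # Every suite would then be constructed and run the same way, e.g.:
    #     GreetingSuite(monitor).run()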
> 
> I considered doing that, but then I thought that the best way would be
> to have a 'real' notion of test suites and test cases, which is missing
> from kvm-autotest today.

Ok, fair enough. So when you can, send me the v2 and I'll do a last
check, test it a bit and then get it upstream.

Thank you!
