On Mon, 18 Oct 2010 09:42:47 -0200 Lucas Meneghel Rodrigues <lmr@xxxxxxxxxx> wrote: > On Fri, 2010-10-15 at 17:52 -0300, Luiz Capitulino wrote: > > This commit introduces a suite which checks that QMP conforms to its > > specification (which is file QMP/qmp-spec.txt in QEMU's source tree). > > > > It's important to note that this suite does _not_ do command or > > asynchronous messages testing, as each command or asynchronous message > > has its own specification, and thus should be tested separately. > > > > This suite is limited to test that the basic protocol works as specified, > > that is, to ensure that the greeting message, success and error responses > > contain the basic information the spec says they do. Additionally, > > several errors conditions at the protocol level are also tested. > > This is a very good test quality wise, thanks for the effort writing it! Thanks. > > Signed-off-by: Luiz Capitulino <lcapitulino@xxxxxxxxxx> > > --- > > client/tests/kvm/tests/qmp_basic.py | 374 ++++++++++++++++++++++++++++++++ > > client/tests/kvm/tests_base.cfg.sample | 3 + > > 2 files changed, 377 insertions(+), 0 deletions(-) > > create mode 100644 client/tests/kvm/tests/qmp_basic.py > > > > diff --git a/client/tests/kvm/tests/qmp_basic.py b/client/tests/kvm/tests/qmp_basic.py > > new file mode 100644 > > index 0000000..21198ef > > --- /dev/null > > +++ b/client/tests/kvm/tests/qmp_basic.py > > @@ -0,0 +1,374 @@ > > +import kvm_test_utils > > +from autotest_lib.client.common_lib import error > > + > > +def run_qmp_basic(test, params, env): > > + """ > > + QMP Specification test-suite: this checks if the *basic* protocol conforms > > + to its specification, which is file QMP/qmp-spec.txt in QEMU's source tree. > > + > > + IMPORTANT NOTES: > > + > > + o Most tests depend heavily on QMP's error information (eg. classes), > > + this might have bad implications as the error interface is going to > > + change in QMP > > ^ I was thinking about this, perhaps in the future, when the error > interface does change, this test can be extended to do check the > capabilities of the monitor and act differently according to the results > of the check. Yeah, we'll need something like this. > > + o Command testing is *not* covered in this suite. Each command has its > > + own specification and should be tested separately > > + > > + o We use the same terminology as used by the QMP specification, > > + specially with regard to JSON types (eg. a Python dict is called > > + a json-object) > > + > > + o This is divided in sub test-suites, please check the bottom of this > > + file to check the order in which they are run > > + > > + TODO: > > + > > + o Finding which test failed is not as easy as it should be > > ^ What I usually do is to keep a failure dictionary/list that contains > all tests that failed at a given time, and allways append new errors to > it when they happen. In the end of the test, when raising > error.TestError, this contextual info can be added to the exception. Looks like a good idea, but I think this should be part of the autotest API. Even better, I really think that we should introduce the notion of test suite and test cases and provide appropriate assert methods. A good way to do this would be to use Python's unittest module, this would bring a number of important features I miss in kvm-autotest: o test fixture o The notion of test suite and test case: helps keeping things separate and identifying which test has failed o Ability to run tests from the command-line o Good assert methods o Sane output The problem with kvm-autotest output today is that, it's more about debugging kvm-autotest then reporting appropriate test results. I know that there are scripts to make a better report, but this looks like a workaround to me. Take a look at C check's output: Success: ~/src/qmp-unstable (mon-save-transfers-v0)/ ./check-qint Running suite(s): QInt test-suite 100%: Checks: 5, Failures: 0, Errors: 0 ~/src/qmp-unstable (master)/ Failure: ~/src/qmp-unstable (master *)/ ./check-qint Running suite(s): QInt test-suite 80%: Checks: 5, Failures: 1, Errors: 0 check-qint.c:64:F:Public Interface:qint_get_int_test:0: Assertion 'qint_get_int(qi) == 2' failed ~/src/qmp-unstable (master *)/ Python's unittest module output is very similar. I don't think we should have the exactly same output, but IMO the current one is only useful to know that kvm-autotest is not stuck (which is bad from a usability perspective). > > > + o Are all those check_*() functions really needed? Wouldn't a > > + specialized class (eg. a Response class) do better? > > ^ It is an interesting idea, we could wrap all the checks on a Response > class. Also, it seems to me that we could have a better implementation > for checking with assertions, such as: > > if not isinstance(qmp_dict, dict): > raise error.TestFail(... > > assert isinstance(qmp_dict, dict), "qmp_dict is not a dict... > > And during the actual test, run the tests in a try/except block that > traps AssertionErrors and turns them into error.TestFails Can we consider this a future improvement? The current series is very useful as is and I'd like to get it merged ASAP. > > + """ > > + def fail_no_key(qmp_dict, key): > > + if not isinstance(qmp_dict, dict): > > + raise error.TestFail("qmp_dict is not a dict (it's '%s')" % type(qmp_dict)) > > + if not key in qmp_dict: > > + raise error.TestFail("'%s' key doesn't exist in dict ('%s')" > > + % (key, str(qmp_dict))) > > + > > + def check_dict_key(qmp_dict, key, keytype): > > + """ > > + Performs the following checks on a QMP dict key: > > + > > + 1. qmp_dict is a dict > > + 2. key exists in qmp_dict > > + 3. key is of type keytype > > + > > + If any of these checks fails, error.TestFail is raised. > > + """ > > + fail_no_key(qmp_dict, key) > > + if not isinstance(qmp_dict[key], keytype): > > + raise error.TestFail("'%s' key is not of type '%s', it's '%s'" > > + % (key, keytype, type(qmp_dict[key]))) > > + > > + def check_key_is_dict(qmp_dict, key): > > + check_dict_key(qmp_dict, key, dict) > > + > > + def check_key_is_list(qmp_dict, key): > > + check_dict_key(qmp_dict, key, list) > > + > > + def check_key_is_str(qmp_dict, key): > > + check_dict_key(qmp_dict, key, unicode) > > + > > + def check_str_key(qmp_dict, keyname, value=None): > > + check_dict_key(qmp_dict, keyname, unicode) > > + if value and value != qmp_dict[keyname]: > > + raise error.TestFail("'%s' key value '%s' should be '%s'" > > + % (keyname, str(qmp_dict[keyname]), str(value))) > > + > > + def check_key_is_int(qmp_dict, key): > > + fail_no_key(qmp_dict, key) > > + try: > > + value = int(qmp_dict[key]) > > + except: > > + raise error.TestFail("'%s' key is not of type int, it's '%s'" > > + % (key, type(qmp_dict[key]))) > > + > > + def check_bool_key(qmp_dict, keyname, value=None): > > + check_dict_key(qmp_dict, keyname, bool) > > + if value and value != qmp_dict[keyname]: > > + raise error.TestFail("'%s' key value '%s' should be '%s'" > > + % (keyname, str(qmp_dict[keyname]), str(value))) > > + > > + def check_success_resp(resp, empty=False): > > + """ > > + Check QMP OK response. > > + > > + @param resp: QMP response > > + @param empty: if True, response should not contain data to return > > + """ > > + check_key_is_dict(resp, "return") > > + if empty and len(resp["return"]) > 0: > > + raise error.TestFail("success response is not empty ('%s')" > > + % str(resp)) > > ^ Here a little detail in this format string operation: We try to follow > PEP 8 as much as possible so the % operator should be in the same line > as the initial string, ie, this should have been written as: > > raise error.TestFail("success response is not empty ('%s')" % > str(resp)) > > Please consider this for all other similar instances of this construct. Will do and submit a v2. > > + def check_error_resp(resp, classname=None, datadict=None): > > + """ > > + Check QMP error response. > > + > > + @param resp: QMP response > > + @param classname: Expected error class name > > + @param datadict: Expected error data dictionary > > + """ > > + check_key_is_dict(resp, "error") > > + check_key_is_str(resp["error"], "class") > > + if classname and resp["error"]["class"] != classname: > > + raise error.TestFail("got error class '%s' expected '%s'" > > + % (resp["error"]["class"], classname)) > > + check_key_is_dict(resp["error"], "data") > > + if datadict and resp["error"]["data"] != datadict: > > + raise error.TestFail("got data dict '%s' expected '%s'" > > + % (resp["error"]["data"], datadict)) > > + > > + def test_version(version): > > + """ > > + Check the QMP greeting message version key which, according to QMP's > > + documentation, should be: > > + > > + { "qemu": { "major": json-int, "minor": json-int, "micro": json-int } > > + "package": json-string } > > + """ > > + check_key_is_dict(version, "qemu") > > + for key in [ "major", "minor", "micro" ]: > > + check_key_is_int(version["qemu"], key) > > + check_key_is_str(version, "package") > > ^ Forgot to put 2 lines between the 2 methods Will fix. > > + def test_greeting(greeting): > > + check_key_is_dict(greeting, "QMP") > > + check_key_is_dict(greeting["QMP"], "version") > > + check_key_is_list(greeting["QMP"], "capabilities") > > ^ Same here > > > + def greeting_suite(monitor): > > ^ About the suites: As they all get a monitor as a parameter, it seems > better to write them as classes, that get a monitor parameter on its > constructor, then all the suite functions are methods of that class. I considered doing that, but then I thought that the best way would be to have a 'real' notion of test suite and test cases, which misses in kvm-autotest today. > > + """ > > + Check the greeting message format, as described in the QMP > > + specfication section '2.2 Server Greeting'. > > + > > + { "QMP": { "version": json-object, "capabilities": json-array } } > > + """ > > + greeting = monitor.get_greeting() > > + test_greeting(greeting) > > + test_version(greeting["QMP"]["version"]) > > + > > + > > + def json_parsing_errors_suite(monitor): > > + """ > > + Check that QMP's parser is able to recover from parsing errors, please > > + check the JSON spec for more info on the JSON syntax (RFC 4627). > > + """ > > + # We're quite simple right now and the focus is on parsing errors that > > + # have already biten us in the past. > > ^ Excellent, regression suites are usually a great way to tell whether > functionalities have regressed or not :) > > > + # TODO: The following test-cases are missing: > > + # > > + # - JSON numbers, strings and arrays > > + # - More invalid characters or malformed structures > > + # - Valid, but not obvious syntax, like zillion of spaces or > > + # strings with unicode chars (different suite maybe?) > > + bad_json = [] > > + > > + # A JSON value MUST be an object, array, number, string, true, false, > > + # or null > > + # > > + # NOTE: QMP seems to ignore a number of chars, like: | and ? > > + bad_json.append(":") > > + bad_json.append(",") > > + > > + # Malformed json-objects > > + # > > + # NOTE: sending only "}" seems to break QMP > > ^ Should we open a bug about it? Will do, I forgot. > > > + # NOTE: Duplicate keys are accepted (should it?) > > ^ It most likely shouldn't. Yes, the JSON standard uses the SHOULD keyword for this, so this conforms to the spec, we just need to be sure that this is what we want. > > > + bad_json.append("{ \"execute\" }") > > + bad_json.append("{ \"execute\": \"query-version\", }") > > + bad_json.append("{ 1: \"query-version\" }") > > + bad_json.append("{ true: \"query-version\" }") > > + bad_json.append("{ []: \"query-version\" }") > > + bad_json.append("{ {}: \"query-version\" }") > > + > > + for cmd in bad_json: > > + resp = monitor.send(cmd) > > + check_error_resp(resp, "JSONParsing") > > + > > + > > + def test_id_key(monitor): > > + """ > > + Check that QMP's "id" key is correctly handled. > > + """ > > + # The "id" key must be echoed back in error responses > > + id = "kvm-autotest" > > + resp = monitor.cmd_qmp("eject", { "foobar": True }, id=id) > > + check_error_resp(resp) > > + check_str_key(resp, "id", id) > > + > > + # The "id" key must be echoed back in success responses > > + resp = monitor.cmd_qmp("query-status", id=id) > > + check_success_resp(resp) > > + check_str_key(resp, "id", id) > > + > > + # The "id" key can be any json-object > > + for id in [ True, 1234, "string again!", [1, [], {}, True, "foo"], > > + { "key": {} } ]: > > + resp = monitor.cmd_qmp("query-status", id=id) > > + check_success_resp(resp) > > + if resp["id"] != id: > > + raise error.TestFail("expected id '%s' but got '%s'" > > + % (str(id), str(resp["id"]))) > > + > > + def test_invalid_arg_key(monitor): > > + """ > > + Currently, the only supported keys in the input object are: "execute", > > + "arguments" and "id". Although expansion is supported, invalid key > > + names must be detected. > > + """ > > + resp = monitor.cmd_obj({ "execute": "eject", "foobar": True }) > > + check_error_resp(resp, "QMPExtraInputObjectMember", > > + { "member": "foobar" }) > > + > > + def test_bad_arguments_key_type(monitor): > > + """ > > + The "arguments" key must be an json-object. > > + > > + We use the eject command to perform the tests, but that's a random > > + choice, any command that accepts arguments will do, as the command > > + doesn't get called. > > + """ > > + for item in [ True, [], 1, "foo" ]: > > + resp = monitor.cmd_obj({ "execute": "eject", "arguments": item }) > > + check_error_resp(resp, "QMPBadInputObjectMember", > > + { "member": "arguments", "expected": "object" }) > > + > > + def test_bad_execute_key_type(monitor): > > + """ > > + The "execute" key must be a json-string. > > + """ > > + for item in [ False, 1, {}, [] ]: > > + resp = monitor.cmd_obj({ "execute": item }) > > + check_error_resp(resp, "QMPBadInputObjectMember", > > + { "member": "execute", "expected": "string" }) > > + > > + def test_no_execute_key(monitor): > > + """ > > + The "execute" key must exist, we also test for some stupid parsing > > + errors. > > + """ > > + for cmd in [ {}, { "execut": "qmp_capabilities" }, > > + { "executee": "qmp_capabilities" }, { "foo": "bar" }]: > > + resp = monitor.cmd_obj(cmd) > > + check_error_resp(resp) # XXX: check class and data dict? > > + > > + def test_bad_input_obj_type(monitor): > > + """ > > + The input object must be... an json-object. > > + """ > > + for cmd in [ "foo", [], True, 1 ]: > > + resp = monitor.cmd_obj(cmd) > > + check_error_resp(resp, "QMPBadInputObject", { "expected":"object" }) > > + > > + def test_good_input_obj(monitor): > > + """ > > + Basic success tests for issuing QMP commands. > > + """ > > + # NOTE: We don't use the cmd_qmp() method here because the command > > + # object is in a 'random' order > > + resp = monitor.cmd_obj({ "execute": "query-version" }) > > + check_success_resp(resp) > > + > > + resp = monitor.cmd_obj({ "arguments": {}, "execute": "query-version" }) > > + check_success_resp(resp) > > + > > + id = "1234foo" > > + resp = monitor.cmd_obj({ "id": id, "execute": "query-version", > > + "arguments": {} }) > > + check_success_resp(resp) > > + check_str_key(resp, "id", id) > > + > > + # TODO: would be good to test simple argument usage, but we don't have > > + # a read-only command that accepts arguments. > > + > > + def input_object_suite(monitor): > > + """ > > + Check the input object format, as described in the QMP specfication > > + section '2.3 Issuing Commands'. > > + > > + { "execute": json-string, "arguments": json-object, "id": json-value } > > + """ > > + test_good_input_obj(monitor) > > + test_bad_input_obj_type(monitor) > > + test_no_execute_key(monitor) > > + test_bad_execute_key_type(monitor) > > + test_bad_arguments_key_type(monitor) > > + test_id_key(monitor) > > + test_invalid_arg_key(monitor) > > + > > + def argument_checker_suite(monitor): > > + """ > > + Check that QMP's argument checker is detecting all possible errors. > > + > > + We use a number of different commands to perform the checks, but the > > + command used doesn't matter much as QMP performs argument checking > > + _before_ calling the command. > > + """ > > + # stop doesn't take arguments > > + resp = monitor.cmd_qmp("stop", { "foo": 1 }) > > + check_error_resp(resp, "InvalidParameter", { "name": "foo" }) > > + > > + # required argument omitted > > + resp = monitor.cmd_qmp("screendump") > > + check_error_resp(resp, "MissingParameter", { "name": "filename" }) > > + > > + # 'bar' is not a valid argument > > + resp = monitor.cmd_qmp("screendump", { "filename": "outfile", > > + "bar": "bar" }) > > + check_error_resp(resp, "InvalidParameter", { "name": "bar"}) > > + > > + # test optional argument: 'force' is omitted, but it's optional, so > > + # the handler has to be called. Test this happens by checking an > > + # error that is generated by the handler itself. > > + resp = monitor.cmd_qmp("eject", { "device": "foobar" }) > > + check_error_resp(resp, "DeviceNotFound") > > + > > + # filename argument must be a json-string > > + for arg in [ {}, [], 1, True ]: > > + resp = monitor.cmd_qmp("screendump", { "filename": arg }) > > + check_error_resp(resp, "InvalidParameterType", > > + { "name": "filename", "expected": "string" }) > > + > > + # force argument must be a json-bool > > + for arg in [ {}, [], 1, "foo" ]: > > + resp = monitor.cmd_qmp("eject", { "force": arg, "device": "foo" }) > > + check_error_resp(resp, "InvalidParameterType", > > + { "name": "force", "expected": "bool" }) > > + > > + # val argument must ba a json-int > > ^ Typo in the comment > > > + for arg in [ {}, [], True, "foo" ]: > > + resp = monitor.cmd_qmp("memsave", { "val": arg, "filename": "foo", > > + "size": 10 }) > > + check_error_resp(resp, "InvalidParameterType", > > + { "name": "val", "expected": "int" }) > > + > > + # value argument must ba a json-number > > ^ Typo in the comment Will fix. > > > + for arg in [ {}, [], True, "foo" ]: > > + resp = monitor.cmd_qmp("migrate_set_speed", { "value": arg }) > > + check_error_resp(resp, "InvalidParameterType", > > + { "name": "value", "expected": "number" }) > > + > > + # qdev-type commands have their own argument checker, all QMP does > > + # is to skip its checking and pass arguments through. Check this > > + # works by providing invalid options to device_add and expecting > > + # an error message from qdev > > + resp = monitor.cmd_qmp("device_add", { "driver": "e1000","foo": "bar" }) > > + check_error_resp(resp, "PropertyNotFound", > > + {"device": "e1000", "property": "foo"}) > > + > > + > > + def unknown_commands_suite(monitor): > > + """ > > + Check that QMP handles unknown commands correctly. > > + """ > > + # We also call a HMP-only command, to be sure it will fail as expected > > + for cmd in [ "bar", "query-", "query-foo", "q", "help" ]: > > + resp = monitor.cmd_qmp(cmd) > > + check_error_resp(resp, "CommandNotFound", { "name": cmd }) > > + > > + > > + vm = kvm_test_utils.get_living_vm(env, params.get("main_vm")) > > + > > + # Run all suites > > + greeting_suite(vm.monitor) > > + input_object_suite(vm.monitor) > > + argument_checker_suite(vm.monitor) > > + unknown_commands_suite(vm.monitor) > > + json_parsing_errors_suite(vm.monitor) > > + > > + # check if QMP is still alive > > + if not vm.monitor.is_responsive(): > > + raise error.TestFail('QEMU is not alive after QMP testing') > > diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample > > index 769d750..2f6b077 100644 > > --- a/client/tests/kvm/tests_base.cfg.sample > > +++ b/client/tests/kvm/tests_base.cfg.sample > > @@ -456,6 +456,9 @@ variants: > > - fmt_raw: > > image_format_stg = raw > > > > + - qmp_basic: install setup unattended_install.cdrom > > + type = qmp_basic > > + > > - vlan: install setup unattended_install.cdrom > > type = vlan > > # subnet should not be used by host > > -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majordomo@xxxxxxxxxxxxxxx More majordomo info at http://vger.kernel.org/majordomo-info.html