Re: [PATCH 14/24] qemu: block: Refactor node name detection code

On 07/26/2017 05:00 AM, Peter Krempa wrote:
> Remove the complex and unreliable code which inferred the node name
> hierarchy only from data returned by 'query-named-block-nodes'. It turns
> out that query-blockstats contains the full hierarchy of nodes as
> perceived by qemu, so the inference code is not necessary.
> 
> In query-blockstats, the 'parent' object corresponds to the storage
> behind a storage volume and 'backing' corresponds to the lower level of
> the backing chain. Since all of them have node names, this data can
> easily be used to detect node names.
> 
> In addition to the code refactoring, the one remaining test case needed
> to be fixed along with it.
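
For readers trying to follow along, the shape being threaded together
looks roughly like this (a hand-trimmed sketch of one query-blockstats
array element, not data copied from this patch's tests), with the
pieces being pulled out via libvirt's virjson.h helpers; the variable
names here are mine, not the patch's:

    /* One query-blockstats element, trimmed to the interesting keys:
     *   { "device": "drive-virtio-disk0",
     *     "node-name": "#block558",          <- format layer
     *     "parent":  { "node-name": ... },   <- storage behind the volume
     *     "backing": { "node-name": ...,     <- next member of the chain,
     *                  "parent": ...,           recursing all the way down
     *                  "backing": ... } }
     */
    const char *nodename = virJSONValueObjectGetString(item, "node-name");
    virJSONValuePtr parent = virJSONValueObjectGetObject(item, "parent");
    virJSONValuePtr backing = virJSONValueObjectGetObject(item, "backing");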

The diff is hard to read given how much changed; it's easier to just
focus on the code additions to see if the new stuff makes sense,
ignoring what used to be there.

> ---
>  src/qemu/qemu_block.c                              | 290 ++++++++-------------
>  src/qemu/qemu_block.h                              |  10 +-
>  .../qemumonitorjson-nodename-basic-blockstats.json | 166 ++++++++++++
>  ...qemumonitorjson-nodename-basic-named-nodes.json |  18 +-
>  .../qemumonitorjson-nodename-basic.result          |  12 +-
>  tests/qemumonitorjsontest.c                        |  22 +-
>  6 files changed, 313 insertions(+), 205 deletions(-)
>  create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-nodename-basic-blockstats.json
> 

> -
> +/* list of driver names of layers that qemu automatically adds into the
> + * backing chain */
> +static const char *qemuBlockDriversBlockjob[] = {
> +    "mirror_top", "commit_top", NULL };

This list may change over time; also, qemu 2.10 will hide these implicit
nodes (or did 2.9 expose them, so we are stuck dealing with them?)

> +
> +        /* qemu 2.9 reports layers in the backing chain which don't correspond
> +         * to files. skip them */
> +        if (qemuBlockDriverMatch(drvname, qemuBlockDriversBlockjob)) {

And that answers my question.
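
(The matcher itself isn't quoted here, but from the call site it is
presumably just a scan of the NULL-terminated list above; something
like this, guessing at the body:)

    static bool
    qemuBlockDriverMatch(const char *drvname,
                         const char **drivers)
    {
        while (*drivers) {
            if (STREQ(drvname, *drivers))
                return true;
            drivers++;
        }
        return false;
    }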


> +qemuBlockNodeNameGetBackingChainDisk(size_t pos ATTRIBUTE_UNUSED,
> +                                     virJSONValuePtr item,
> +                                     void *opaque)

> +    if (qemuBlockNodeNameGetBackingChainBacking(item, data->nodenamestable,
> +                                                &devicedata) < 0)

Sounds like we're stuttering on Backing; but I guess it makes sense (we
are doing pieces of 'GetBackingChain', with one piece getting the disk
and the other getting the next backing level).


>  /**
>   * qemuBlockNodeNameGetBackingChain:
> - * @json: JSON array of data returned from 'query-named-block-nodes'
> + * @namednodes: JSON array of data returned from 'query-named-block-nodes'
> + * @blockstats: JSON array of data returned from 'query-blockstats'

So we really have to query two pieces of information, then thread them
together (but at least the threading is easy to do, now that qemu
auto-names every node); definitely better than our old code of
reconstructing the threading by filename guesses.  But it also goes to
show why Kevin is working on a new query-block-something for 2.11 that
will be less painful; so while we'll have to keep this cruft for as long
as we support 2.9/2.10, we'll want to give feedback to Kevin's
improvements that will let us rebuild things in less time.  Or even
better, we reach the point where we assign all node names ourselves, and
don't even have to do reconstructive queries.  But that's for down the
road; now back to reviewing this patch.
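
Spelled out, the caller now has to do something along these lines (a
sketch only: I'm assuming thin monitor wrappers with these names that
return the raw JSON arrays, and guessing at the return type):

    virJSONValuePtr named = qemuMonitorQueryNamedBlockNodes(mon);
    virJSONValuePtr stats = qemuMonitorQueryBlockstats(mon);
    virHashTablePtr table = NULL;

    if (named && stats)
        table = qemuBlockNodeNameGetBackingChain(named, stats);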

> +++ b/tests/qemumonitorjsondata/qemumonitorjson-nodename-basic-named-nodes.json
> @@ -21,7 +21,7 @@
>        },
>        "iops_wr": 0,
>        "ro": true,
> -      "node-name": "#block567",
> +      "node-name": "#block558",

Trivia: qemu randomizes the last two digits (so you HAVE to query rather
than making hard-coded assumptions about what node name will be
auto-assigned); but the leading digits are merely a counter of how many
nodes qemu previously had to name (this node thus had 5 preceding nodes)
- so having the same #block5XX means you created your new trace using
the same steps of node creation as the old trace ;)

> -    DO_TEST_BLOCK_NODE_DETECT("basic", "#block118");
> +    DO_TEST_BLOCK_NODE_DETECT("basic", "drive-virtio-disk0");

The real payout of your new scheme: you get to start your query from a
block device name (under libvirt control) rather than from an
intermediate generated node name!
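
i.e. the result can now be consumed with a plain hash lookup keyed by
the drive alias (hypothetical usage; I'm guessing the detection code
hands back a virHashTable of per-disk chain data):

    /* keyed by the libvirt-controlled alias, not by a #blockNNN name */
    void *chain = virHashLookup(table, "drive-virtio-disk0");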

I didn't spot anything blatantly wrong, and the new code DOES seem less
convoluted (yeah, we have to bounce between two arrays to piece things
together, but we aren't reconstructing the threading by hand). And, as
the commit message says, you can now handle a case that previously
failed back when query-blockstats was not in use.  I'd still like to
test things once I've gone over the whole series, but if I get through
my review to 24/24 without any further red flags, you can add:

Reviewed-by: Eric Blake <eblake@xxxxxxxxxx>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


