On 28.09.2016 00:53, John Ferlan wrote:
>
>
> On 09/27/2016 04:17 AM, Nikolay Shirokovskiy wrote:
>>
>>
>> On 26.09.2016 23:07, John Ferlan wrote:
>>>
>>>
>>> On 09/07/2016 05:24 AM, Nikolay Shirokovskiy wrote:
>>>> ---
>>>>  src/qemu/qemu_domain.h       |  1 +
>>>>  src/qemu/qemu_monitor_json.c | 17 +++++++++++++++++
>>>>  tests/qemumonitorjsontest.c  | 36 ++++++++++++++++++++++++++++++++++++
>>>>  3 files changed, 54 insertions(+)
>>>>
>>>> diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
>>>> index 13c0372..ea57111 100644
>>>> --- a/src/qemu/qemu_domain.h
>>>> +++ b/src/qemu/qemu_domain.h
>>>> @@ -337,6 +337,7 @@ struct qemuDomainDiskInfo {
>>>>      bool tray_open;
>>>>      bool empty;
>>>>      int io_status;
>>>> +    unsigned long long guest_size;
>>>
>>> a/k/a: 'virtual-size' in qemu and 'capacity' in libvirt
>>>
>>> I probably would stick with capacity or something that says "disk"
>>> rather than "guest".
>>>
>>>>  };
>>>>
>>>
>>> Trying to determine/decide whether this patch should be "separated" and
>>> perhaps made usable with/for the existing callers, or whether patch 3
>>> should use the block stats (qemuDomainGetStatsBlock), which already
>>> handles some details that could be missing here (at least w/r/t backing
>>> chains).
>>
>> I don't know, does the backing chain change anything in this case? I need the
>> virtual size of the top image only, and anyway, is it even possible for images
>> in a backing chain to have different virtual sizes? It would not make much sense.
>
> I agree... I don't have in-depth knowledge of how the backing chains
> work, so it's more of a question. Hopefully someone that understands
> that logic (pkrempa? eblake?) would be able to be more definitive.
>
> I just wouldn't want something to be missed if backing chains were required.
>
>>
>> qemuDomainGetStatsBlock (and more specifically qemuMonitorBlockStatsUpdateCapacity,
>> which is better suited to getting just the size) has rather inconvenient
>> arguments, geared toward querying many disks at once.
>>
>
> There's lots of similarities that I see in that code... I spent some
> time today working through a mechanism to make one "query-block" call
> that 3 consumers (BlockInfo, BlockStats, and DiskNameLookup) could use
> (I have it working nominally at least). I would think this could then be
> used to more easily add a new lookup function that's only returning the
> data you want rather than putting stats data in a BlockInfo...
>
>>>
>>> Another option is adjusting the qemuMonitorJSONDiskNameLookup code to be
>>> more multi-purpose, since it's using the "query-block" for
>>> *BlockPullCommon (from Rebase and Pull) as well as from *BlockCommit, and
>>> it already checks/expects both "inserted" and "image" to be present in
>>> order to return that name.
>>
>> I would not make a function with such a specific name additionally provide
>> size info... I think it is perfectly fine to have many monitor commands that
>> internally use the same 'query-block' qemu command, because that command is
>> really comprehensive. :)
>>
>
> Note that I didn't say add a new param/return - I said make it more
> multi-purpose, with a subtly implied inference that perhaps a function
> name change and a different return value (or local/static structure)
> could work as well (and hence why I tried to combine things today to see
> what was possible).
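
Just so we are picturing the same thing: the shared fetch you describe could look
roughly like the sketch below. This is only an illustration of the idea, not your
prototype; the helper name qemuMonitorJSONQueryBlockDevices is invented for this
example, while the qemuMonitorJSON*/virJSONValue* calls are the existing libvirt
helpers.

/* Hypothetical shared helper: run "query-block" once and hand the
 * resulting device array to the callers (GetBlockInfo, BlockStats,
 * DiskNameLookup).  The caller owns the returned array and must
 * free it with virJSONValueFree(). */
static virJSONValuePtr
qemuMonitorJSONQueryBlockDevices(qemuMonitorPtr mon)
{
    virJSONValuePtr cmd;
    virJSONValuePtr reply = NULL;
    virJSONValuePtr devices = NULL;

    if (!(cmd = qemuMonitorJSONMakeCommand("query-block", NULL)))
        return NULL;

    if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0 ||
        qemuMonitorJSONCheckError(cmd, reply) < 0)
        goto cleanup;

    /* detach the "return" array so it outlives the reply object */
    if (!(devices = virJSONValueObjectStealArray(reply, "return")))
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("query-block reply was missing device list"));

 cleanup:
    virJSONValueFree(cmd);
    virJSONValueFree(reply);
    return devices;
}

Each consumer would then loop over the returned array and pick out just the
fields it needs, so a single monitor round trip serves all of them.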
>
>>>
>>>
>>>> typedef struct _qemuDomainHostdevPrivate qemuDomainHostdevPrivate;
>>>> diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
>>>> index 3d0a120..7903b47 100644
>>>> --- a/src/qemu/qemu_monitor_json.c
>>>> +++ b/src/qemu/qemu_monitor_json.c
>>>> @@ -1800,6 +1800,8 @@ int qemuMonitorJSONGetBlockInfo(qemuMonitorPtr mon,
>>>>
>>>>      for (i = 0; i < virJSONValueArraySize(devices); i++) {
>>>>          virJSONValuePtr dev = virJSONValueArrayGet(devices, i);
>>>> +        virJSONValuePtr inserted;
>>>> +        virJSONValuePtr image;
>>>>          struct qemuDomainDiskInfo *info;
>>>>          const char *thisdev;
>>>>          const char *status;
>>>> @@ -1854,6 +1856,21 @@ int qemuMonitorJSONGetBlockInfo(qemuMonitorPtr mon,
>>>>              if (info->io_status < 0)
>>>>                  goto cleanup;
>>>>          }
>>>> +
>>>> +        if ((inserted = virJSONValueObjectGetObject(dev, "inserted"))) {
>>>> +            if (!(image = virJSONValueObjectGetObject(inserted, "image"))) {
>>>> +                virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
>>>> +                               _("cannot read 'inserted/image' value"));
>>>> +                goto cleanup;
>>>> +            }
>>>
>>> Other code that checks "inserted" and "image" doesn't fail - it just
>>> ignores. If there's only going to be one consumer, then I don't believe
>>> we want a failure such as this to affect other callers that may not
>>> care. That could mean having some sort of "marker" in the returned
>>> buffer that the fetch of "virtual-size" did occur (or just using not
>>> zero). It would be a shame to have some unexpected failures for a field
>>> that nothing else uses, especially since the *GetBlockInfo is being used
>>> during process launch and reconnect (via qemuProcessRefreshDisks).
>>>
>>
>> I doubt there can be 'inserted' without 'image'; it doesn't make sense,
>
> True, but I'm concerned that by adding an error path to existing code
> there's some 'strange' path or device that previously wasn't causing an
> error that now would be. It's the paranoia of this code. I think eblake
> has extensive knowledge of the QEMU side/layout.
>
>> inserted what? :) I guess qemuDomainGetStatsBlock ignores a missing 'image'
>> because it can afford to, while qemuMonitorJSONDiskNameLookup does not
>> really ignore it: if the name is not found, the function eventually returns
>> an error. So if this holds, there should be no 'inserted' without 'image',
>> and then the shame will go to qemu :)
>>
>>> Furthermore, what if there's a "backing-image" for "this" particular
>>> path? Will that be important for the pull backups support? Without
>>> looking at patch 3 I would think so...
>>
>> Not really. The backing chain has nothing to do with backup; there is just no
>> difference from the backup POV.
>>
>
> Maybe it's my novice/naive knowledge of backing chains, but if you're
> taking a backup and some data was in the backing chain, would it be lost,
> or does the API you're calling handle that for you? Again, I don't have

A backup sees the disk exactly as the guest does: a block device, nothing more,
with no backing chain structure visible.

> a lot of exposure to that code. I do know there's still upstream qemu
> work taking place. We really should be very careful about adding
> anything to libvirt before the upstream work is done. In particular any
> x-* command support.
>

AFAIK full pull backups as well as full/incremental push backups are ready to
use in qemu. It is a pity that the other commands that help organize the pull
backup process still have experimental status. One day the x- prefix will be
dropped, and I doubt that things will change very radically compared to what
we have today.
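
Coming back to the error-handling concern above, a minimal sketch of the
non-failing variant would be something like the following; guest_size simply
stays 0 when qemu does not report the data, and only the existing virJSONValue
helpers are used. It only illustrates the suggestion, it is not the final patch.

/* inside the device loop of qemuMonitorJSONGetBlockInfo(): fetch
 * "inserted/image/virtual-size" opportunistically; a missing object
 * or key leaves info->guest_size at 0, i.e. "capacity unknown" */
virJSONValuePtr inserted;
virJSONValuePtr image;

if ((inserted = virJSONValueObjectGetObject(dev, "inserted")) &&
    (image = virJSONValueObjectGetObject(inserted, "image"))) {
    /* deliberately ignore the return value: zero already means
     * "not reported", so no new error path is introduced */
    ignore_value(virJSONValueObjectGetNumberUlong(image, "virtual-size",
                                                  &info->guest_size));
}

That would keep the behaviour of the existing callers (process launch and
reconnect via qemuProcessRefreshDisks) unchanged while still giving the backup
code the capacity whenever it is available.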
So if, with the help of the community, this patch can be ironed out, libvirt will
get pull backups as soon as qemu is fully ready.

Nikolay