Re: [PATCHv2 12/16] storage: cache backing chain while qemu domain is live

On 10/16/2012 04:07 PM, Eric Blake wrote:
> On 10/16/2012 01:02 PM, Laine Stump wrote:
>> On 10/13/2012 06:00 PM, Eric Blake wrote:
>>> Technically, we should not be re-probing any file that qemu might
>>> be currently writing to.  As such, we should cache the backing
>>> file chain prior to starting qemu.  This patch adds the cache,
>>> but does not use it until the next patch.
>>>
>>> Ultimately, we want to also store the chain in domain XML, so that
>>> it is remembered across libvirtd restarts, and so that the only
>>> kosher way to modify the backing chain of an offline domain will be
>>> through libvirt API calls, but we aren't there yet.  So for now, we
>>> merely invalidate the cache any time we do a live operation that
>>> alters the chain (block-pull, block-commit, external disk snapshot),
>>> as well as tear down the cache when the domain is not running.
>>>
>>> +++ b/src/conf/domain_conf.c
>>> @@ -971,6 +971,7 @@ void virDomainDiskDefFree(virDomainDiskDefPtr def)
>>>      VIR_FREE(def->src);
>>>      VIR_FREE(def->dst);
>>>      VIR_FREE(def->driverName);
>>> +    virStorageFileFreeMetadata(def->chain);
>> Do you think this name is descriptive enough? (Probably for you, but for
>> someone looking at domain_conf.c for the first time...)
> Is diskdef->backingChain any better?

It sounds more informative to me anyway.

>
>>> +    if (!disk->chain && !force)
>>> +        return -1;
>>> +    return 0;
>>
>> I would naively think that if force was true, you would want to return
>> failure if you couldn't get the chain, but you're returning failure
>> only if force is false. Since that seems counterintuitive to me, I
>> thought I should point it out (very likely it's correct and I again
>> just don't understand, but better safe than sorry :-)
> I was using 'force' more as a flag to invalidate any existing chain, and
> to optionally provide a new chain if convenient.  qemu_process.c was the
> only caller in this patch that passes force=true, and it doesn't care
> about the state of the chain afterwards.  I guess I can simplify this to
> just return -1 if !disk->chain, and then it is up to the caller that
> cares about having a valid chain to check for a 0 result to know if
> def->chain got populated.

If you think that the two things (forcing invalidation of the existing
chain, and caring whether or not the chain was populated at the end)
would ever be required in a different combination, then I suppose that
would be good. On the other hand, if what you've got works for the
current situation, it could always be enhanced later - it's not a public
API or anything. So I'm fine either way.
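
Just to make sure we're reading the proposed simplification the same
way, I'm picturing roughly this (an untested sketch, using the
backingChain rename, with the actual probing elided):

    int
    qemuDomainDetermineDiskChain(struct qemud_driver *driver,
                                 virDomainDiskDefPtr disk,
                                 bool force)
    {
        if (force) {
            /* invalidate whatever was cached before */
            virStorageFileFreeMetadata(disk->backingChain);
            disk->backingChain = NULL;
        }
        if (disk->backingChain)
            return 0;
        /* ...probe disk->src here (using driver for uid/gid and format
         * probing policy) and populate disk->backingChain... */
        return disk->backingChain ? 0 : -1;
    }

That way a caller that cares about having a valid chain checks for a 0
return, and the force=true caller in qemu_process.c can just ignore it.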


>
>>> +++ b/src/qemu/qemu_driver.c
>>> @@ -5705,6 +5705,9 @@ qemuDomainAttachDeviceDiskLive(virConnectPtr conn,
>>>          goto end;
>>>      }
>>>
>>> +    if (qemuDomainDetermineDiskChain(driver, disk, false) < 0)
>>> +        goto end;
>>> +
>> .. after all - here you have force = false, and expect that it might
>> fail, so I'm probably wrong...
> Passing force=false says 'if there is already a chain cached, assume it
> is right and reuse it'; so the only way I can fail is if there is
> nothing cached and I couldn't read the chain.

Okay.

>
>>> @@ -10677,6 +10683,14 @@ qemuDomainSnapshotCreateSingleDiskActive(struct qemud_driver *driver,
>>>      disk->src = source;
>>>      origdriver = disk->format;
>>>      disk->format = VIR_STORAGE_FILE_RAW; /* Don't want to probe backing files */
>>> +    /* XXX Here, we know we are about to alter disk->chain if
>>> +     * successful, so we nuke the existing chain so that future
>>> +     * commands will recompute it.  Better would be storing the chain
>>> +     * ourselves rather than reprobing, but this requires modifying
>>> +     * domain_conf and our XML to fully track the chain across
>>> +     * libvirtd restarts.  */
>> So are you avoiding this for now just to reduce the complexity of this
>> series? Or is there some other reason that it's not practical?
> Short answer - avoiding the complexity of this series.  I _do_ want to
> cache the entire backing chain, in both runtime and persistent XML, as a
> user-visible structure, but that's a much bigger task.  Hence the XXX
> notes to remind whoever gets to that task

Ooh! Hot potato!!!! :-)

>  to remember that this is one
> of the places where we have told qemu to modify the chain, and therefore
> we would need to reflect it in the XML we show to users.
>
>>> +    /* XXX For now, disk chains should only be cached while qemu is
>>> +     * running.  Since we don't track the chain in XML, a user is free
>>> +     * to update the chain while the domain is offline, and then when
>>> +     * they next boot the domain we should re-read the chain from the
>>> +     * files at that point in time.  Only when we track the chain in
>>> +     * XML can we forbid the user from altering the chain of an
>>> +     * offline domain.  */
>> Well, there are two possible levels of caching: 1) cache it in the
>> domain's status for as long as the domain is running (this would
>> alleviate the need to recompute it when libvirtd is restarted), 2) cache
>> it in the domain's persistent config, so that it survives restarts of
>> the domain. Is it used so much that either level of caching would
>> actually be a substantial gain?
> I'm not so sure about the gain for reduced number of open() calls, so
> much as the real flexibility in allowing the user to specify exactly
> which files qemu is allowed to open, and to visually see the entire
> chain at a glance (until this week's qemu.git, you couldn't even get a
> single 'qemu-img info' to show you an entire chain).  But that can come
> later; for now, I needed only enough chain information to make it
> possible to implement the block-commit --shallow flag and selinux labeling.
>
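
To make that concrete for the archives: I'd imagine the persistent XML
eventually growing something like this (a purely hypothetical sketch -
the element names are invented here, not a design proposal):

    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/snap.qcow2'/>
      <backingStore type='file'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/base.qcow2'/>
        <backingStore/>
      </backingStore>
    </disk>
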
>>> +    for (i = 0; i < def->ndisks; i++) {
>>> +        virStorageFileFreeMetadata(def->disks[i]->chain);
>>> +        def->disks[i]->chain = NULL;
>>> +    }
>>> +
>> If you're storing the cache in vm->def rather than vm->newDef, shouldn't
>> this be taken care of automatically? (At domain startup in
>> qemuProcessStart(), virDomainObjSetDefTransient() is called, which makes
>> a copy of vm->def into vm->newDef; then at domain shutdown in
>> qemuProcessStop() (right at the bottom of the function), vm->def is
>> freed and vm->newDef is moved into its place.)
> Ooh, good point.  I'll simplify this code for v3.
>
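
(For reference, the tail of qemuProcessStop() is roughly:

    virDomainDefFree(vm->def);  /* now also frees each disk's chain */
    vm->def = vm->newDef;       /* fresh copy; its chains start out NULL */
    vm->newDef = NULL;

so the explicit per-disk loop should indeed fall out for free.)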

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list

