Re: [PATCHv3 19/19] blockjob: properly label disks for qemu block-commit

On 10/18/2012 02:27 PM, Laine Stump wrote:
> On 10/17/2012 06:30 PM, Eric Blake wrote:
>> I finally have all the pieces in place to perform a block-commit with
>> SELinux enforcing.  There's still missing cleanup work when the commit
>> completes, but doing that requires tracking both the backing chain and
>> the base and top files within that chain in domain XML across libvirtd
>> restarts.  Furthermore, from a security standpoint, once you have
>> granted access, you must assume any damage that can be done will be
>> done; later revoking access is nice to minimize the window of damage,
>> but less important as it does not affect the fact that damage can be
>> done in the first place.  Therefore, saving the revoke efforts until

I'm going to edit this: s/saving/deferring/

>> we have better XML tracking of what chain operations are in effect,
>> including across a libvirtd restart, is reasonable.
> 
> Umm. I don't know that I agree with your statement about the relative
> importance of undoing all the labelling when you're done, but if the
> alternative to this is to require turning off selinux, then it's
> definitely a win.

Any suggestions on wording I could use instead?  Yeah, I'm a bit miffed
that I don't have code working to revoke access in all situations, but I
do revoke it in the trivial (non-blocking) cases; the real work will be
tracking which rights need to be revoked after an asynchronous operation
completes.  Block-pull and block-copy suffer from the same problem, but
once I have the infrastructure in place, all three code paths will be
able to use it.

>> +    if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_DEVICES) &&
>> +        virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0)) {
> 
> It looks to me like everything in cgroup.c returns < 0 ( -errno in most,
> if not all cases). Any idea why all its callers are testing for != 0
> instead of the more standard < 0 ?
> 
> (you're just following the convention, so no complaints on your patch,
> but it looks a bit odd.)

Eww - I didn't even notice that.  It does look like it's worth a
follow-up patch that checks for < 0 in the error tests.  I'll fix this
one in-place, since you noticed, but that's minor, so I won't bother
with a v4.
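
For the in-place fix, that means something like the following (just a
sketch - the error reporting is elided, and 'endjob' stands in for
whatever cleanup label the surrounding function actually uses):

    /* cgroup helpers return 0 on success and -errno on failure, so
     * compare against 0 explicitly rather than using a bare truth
     * test */
    if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_DEVICES) &&
        virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0) < 0) {
        /* report the failure to find the cgroup, then bail */
        goto endjob;
    }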

>> +    if (ret < 0) {
>> +        /* Revert access to read-only, if possible.  */
>> +        qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk, base_canon,
>> +                                          VIR_DISK_CHAIN_READ_ONLY);
>> +        if (top_parent && top_parent != disk->src)
>> +            qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk,
>> +                                              top_parent,
>> +                                              VIR_DISK_CHAIN_READ_ONLY);
> 
> Is it always the case that it was previously read-only, or is this just
> a naive assumption?

Guaranteed.  Remember, the high-level concept is that we are going from:

base <- top <- parent <- active

to

base <- parent <- active

Since qemu is already running, we know that 'active' is open read/write,
and that 'base', 'top', and 'parent' are open read-only as backing
files, prior to the start of the operation.  Qemu then has to write to
'base' (to commit the data from 'top') and to 'parent' (to update its
backing file once 'top' is no longer in the chain), hence our grant of
read/write access to those two files.  If the operation fails, then
immediately reverting to read-only gets us right back where we started.
That also explains why I'm careful to check whether top_parent ==
disk->src: in that case, the parent is the active image, which is
already read/write and must not be reverted to read-only.
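
In code terms, the flow boils down to roughly this (condensed from the
patch, with error reporting and the monitor call elided;
VIR_DISK_CHAIN_READ_WRITE is the read/write counterpart of the mode
shown in the hunk above):

    /* grant read/write up front; skip top_parent when it is the
     * active image, which is already writable */
    if (qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk,
                                          base_canon,
                                          VIR_DISK_CHAIN_READ_WRITE) < 0 ||
        (top_parent && top_parent != disk->src &&
         qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk,
                                           top_parent,
                                           VIR_DISK_CHAIN_READ_WRITE) < 0))
        goto endjob;

    /* ... kick off the block-commit via the qemu monitor, storing
     * the result in ret ... */

    if (ret < 0) {
        /* Revert access to read-only, if possible.  */
        qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk,
                                          base_canon,
                                          VIR_DISK_CHAIN_READ_ONLY);
        if (top_parent && top_parent != disk->src)
            qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk,
                                              top_parent,
                                              VIR_DISK_CHAIN_READ_ONLY);
    }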

On the other hand, if the operation is successful (that is, at the point
we receive the VIR_DOMAIN_BLOCK_JOB_COMPLETED event in qemu_process.c),
it would be nice for us to then revert 'base' and 'parent' to read-only
and revoke access to 'top' entirely, since it is no longer part of the
chain.  That's the patch I'm deferring, because it requires tracking
_which_ files need tightened labels even across libvirtd restarts, and
even when the qemu event is missed because it arrived during the
restart.  Likewise, if the operation starts successfully but is later
canceled mid-stream (the VIR_DOMAIN_BLOCK_JOB_CANCELED event in
qemu_process.c), we have access rights to revoke there too; it's just
that fewer files are affected, since 'top' is still in the chain as a
read-only backing file when the job is canceled.
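
To make that concrete, the eventual handler in qemu_process.c might look
something like this - purely a sketch of the deferred work, where
VIR_DISK_CHAIN_NO_ACCESS comes from earlier in this series, and
'top_canon' stands in for the tracked canonical path of 'top' that we
don't record yet:

    /* hypothetical follow-up, not in this series: tighten labels once
     * the job finishes, assuming the chain was tracked across
     * libvirtd restarts */
    switch (status) {
    case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
        /* 'top' dropped out of the chain, so revoke access entirely */
        qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk,
                                          top_canon,
                                          VIR_DISK_CHAIN_NO_ACCESS);
        /* fall through - base and parent drop back to read-only */
    case VIR_DOMAIN_BLOCK_JOB_CANCELED:
        qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk,
                                          base_canon,
                                          VIR_DISK_CHAIN_READ_ONLY);
        if (top_parent && top_parent != disk->src)
            qemuDomainPrepareDiskChainElement(driver, vm, cgroup, disk,
                                              top_parent,
                                              VIR_DISK_CHAIN_READ_ONLY);
        break;
    }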

> 
>> +    }
>> +    if (cgroup)
>> +        virCgroupFree(&cgroup);
>>      if (qemuDomainObjEndJob(driver, vm) == 0) {
>>          vm = NULL;
>>          goto cleanup;
> 
> ACK.

Given the state of your reviews, I'll make those minor tweaks and push
shortly.  Thanks again for going through all of these patches.

-- 
Eric Blake   eblake@xxxxxxxxxx    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


