Re: [PATCH v2 7/7] qemu_security: Lock metadata while relabelling

On Thu, Aug 23, 2018 at 04:54:01PM +0200, Michal Privoznik wrote:
> On 08/20/2018 07:17 PM, Michal Prívozník wrote:
> > On 08/20/2018 05:16 PM, Daniel P. Berrangé wrote:
> >> On Tue, Aug 14, 2018 at 01:19:43PM +0200, Michal Privoznik wrote:
> >>> Fortunately, we have qemu wrappers, so it's sufficient to put the
> >>> lock/unlock calls only there.
> >>>
> >>> Signed-off-by: Michal Privoznik <mprivozn@xxxxxxxxxx>
> >>> ---
> >>>  src/qemu/qemu_security.c | 107 +++++++++++++++++++++++++++++++++++++++++++++++
> >>>  1 file changed, 107 insertions(+)
> >>>
> 
> 
> >> I'm wondering if this is really the right level to plug in the metadata
> >> locking?  What if we just pass a virLockManagerPtr to the security
> >> drivers and let them lock each resource at the time they need to modify
> >> its metadata? This will trivially guarantee that we always lock the exact
> >> set of files whose metadata we'll be changing.
> >>
> >> e.g. in the SELinux driver the virSecuritySELinuxSetFileconHelper method
> >> would directly call virLockManagerAcquire & virLockManagerRelease,
> >> avoiding the entire virDomainLock abstraction, which was really
> >> focused on managing the content locks around lifecycle state
> >> changes.
> >>
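
To make my suggestion above concrete, roughly what I had in mind - an
untested sketch, where VIR_LOCK_MANAGER_RESOURCE_TYPE_METADATA stands
for whatever resource type this series ends up adding, and the
helper's real argument list is longer than shown:

  #include <selinux/selinux.h>
  #include "locking/lock_manager.h"

  static int
  virSecuritySELinuxSetFileconHelper(virLockManagerPtr lock,
                                     const char *path,
                                     const char *tcon)
  {
      int rv = -1;

      /* Register exactly the path whose metadata we are about to
       * change - hypothetical resource type from this series */
      if (virLockManagerAddResource(lock,
                                    VIR_LOCK_MANAGER_RESOURCE_TYPE_METADATA,
                                    path, 0, NULL, 0) < 0)
          return -1;

      /* Hold the lock only around the actual relabel... */
      if (virLockManagerAcquire(lock, NULL, 0,
                                VIR_DOMAIN_LOCK_FAILURE_DEFAULT, NULL) < 0)
          return -1;

      if (setfilecon(path, tcon) == 0)
          rv = 0;

      /* ...and drop it again straight afterwards */
      if (virLockManagerRelease(lock, NULL, 0) < 0)
          rv = -1;

      return rv;
  }

That way the set of paths we lock is, by construction, the set of
paths we relabel.
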
> > 
> > Yeah, I vaguely recall writing code like this (when I was trying to
> > solve this some time ago). Okay, let me see if that's feasible.
> > 
> > But boy, this is getting hairy.
> 
> So as I'm writing these patches I came to realize a couple of things:
> 
> a) instead of the domain PID we should pass the libvirtd PID, because we
> want to release the locks if libvirtd dies, not the domain.
> 
> b) that, however, leads to a problem, because
> virLockManagerLockDaemonAcquire() closes the socket to virtlockd, causing
> it to kill the owner of the lock, i.e. it kills libvirtd immediately.

This is fine ;-P

> c) even if I hack around b) so that we connect only once for each
> lock+unlock pair of calls, we would still connect dozens of times when
> starting a domain (all the paths we label, times all the active
> secdrivers). So we should share the connection here too.

Yeah, makes sense.

> Now the question is how to do this effectively and cleanly (code-wise).
> For solving b) we can have a new flag, say VIR_LOCK_MANAGER_KEEP_CONNECT,
> that would cause virLockManagerLockDaemonAcquire() to save the
> (connection, program, counter) tuple somewhere in the lock driver's
> private data so that virLockManagerLockDaemonRelease(), called with the
> same flag, can re-use the data.
> 
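
If you go that way, I'd imagine something like this - purely
illustrative, the flag value and the struct are made up:

  /* Hypothetical third acquire flag, alongside the existing
   * REGISTER_ONLY and RESTRICT flags */
  VIR_LOCK_MANAGER_KEEP_CONNECT = (1 << 2),

  /* Hypothetical tuple stashed in the lockd driver's private data */
  struct virLockManagerLockDaemonConnCache {
      virNetClientPtr client;          /* open socket to virtlockd */
      virNetClientProgramPtr program;  /* RPC program bound to it */
      int counter;                     /* RPC call serial */
  };

Acquire() called with the flag would fill the cache instead of closing
the socket, and Release() with the same flag would reuse it and only
then really close the connection.
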
> For c) I guess we will need to open the connection in
> virLockManagerLockDaemonNew(), put the socket FD into the event loop so
> that EOF is caught, and initiate a reopen in that case. However, I'm not
> sure this is desirable - being constantly connected to virtlockd.

Can we use a model similar to the one I used for the shared secondary
driver connections?

By default a call to virGetConnectNetwork() will open a new connection.

To avoid opening & closing hundreds of connections, though, some places
call virSetConnectNetwork() to store a pre-opened connection in a
thread-local. That stays open until virSetConnectNetwork() is called
again, passing in NULL.

We would put such a cache around any bits of code that trigger many
connection calls to virtlockd.
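
i.e. something like this hypothetical pair for virtlockd connections
(virThreadLocal, virObjectRef & virObjectUnref are the existing util
APIs; the function names and virLockDaemonConnect() are made up):

  #include "rpc/virnetclient.h"
  #include "virthread.h"

  /* One cached virtlockd connection per thread
   * (virThreadLocalInit() call omitted for brevity) */
  static virThreadLocal lockdConn;

  /* Return a referenced client: the cached one if a caller installed
   * it earlier, otherwise a freshly opened connection */
  virNetClientPtr
  virGetConnectLockd(void)
  {
      virNetClientPtr client = virThreadLocalGet(&lockdConn);

      if (client)
          return virObjectRef(client);

      return virLockDaemonConnect();  /* stands in for the real opener */
  }

  /* Install a pre-opened connection; pass NULL to drop the cache */
  int
  virSetConnectLockd(virNetClientPtr client)
  {
      virObjectUnref(virThreadLocalGet(&lockdConn));
      return virThreadLocalSet(&lockdConn, client);
  }

Then whatever code drives the labelling loop would simply bracket it
with virSetConnectLockd(conn) ... virSetConnectLockd(NULL), and
everything underneath reuses the one socket.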

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




