On 04/26/2010 12:56 AM, Avi Kivity wrote:
On 04/26/2010 04:53 AM, Anthony Liguori wrote:
On 04/25/2010 06:51 AM, Avi Kivity wrote:
It depends on what things you think are important. A lot of
libvirt's complexity is based on the fact that it uses a daemon and
needs to deal with the security implications of that. You don't
need explicit labelling if you don't use a daemon.
I don't follow. If you have multiple guests that you want kept off each
other's turf, you have to label their resources, either statically or
dynamically. How is that related to a daemon being present?
Because libvirt only has to perform this labelling because it loses the
original user's security context.
If you invoke qemu with the original user's credentials that launched
the guest, then you don't need to do anything special with respect to
security.
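
To make "direct launch" concrete, here is a minimal sketch in C; the
qemu binary name, arguments and image path are illustrative, not a
statement of how any particular tool invokes qemu:

/* Sketch: launching qemu directly from the invoking user's process.
 * The child inherits uid/gid, SELinux context, namespaces, rlimits,
 * capabilities, etc. -- no relabelling step is needed.
 * Binary name, arguments and image path are made up. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        execlp("qemu-system-x86_64", "qemu-system-x86_64",
               "-m", "1024",
               "-drive", "file=guest.img,if=virtio",
               (char *)NULL);
        perror("execlp");       /* only reached if exec fails */
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    return 0;
}

The point is that qemu simply inherits whatever security context the
parent already had, with no extra machinery.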
IOW, libvirt does not run guests as separate users which is why it
needs to deal with security in the first place.
What if one user has multiple guests? Isolation is still needed.
Don't confuse a management application's concept of users with using
separate uid's to launch guests.
One user per guest does not satisfy some security requirements. The
'M' in the MAC that selinux provides stands for mandatory, which means
that the entities being secured can't leak information even if they
want to (scenario: G1 breaks into qemu, chmods files; G2 breaks into
qemu, reads files).
If you're implementing a Chinese wall policy, then yes, you want to
run each guest as a separate selinux context. Starting guests as
separate users and setting DAC privileges appropriately will achieve this.
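
A sketch of that DAC approach - the per-guest uid/gid and image name
here are invented for illustration:

/* Sketch: isolate a guest with plain DAC by giving it its own uid/gid,
 * chowning its disk image, and dropping to that uid before exec. */
#include <grp.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    uid_t guest_uid = 2001;      /* hypothetical per-guest uid */
    gid_t guest_gid = 2001;

    if (chown("guest1.img", guest_uid, guest_gid) < 0 ||
        chmod("guest1.img", 0600) < 0) {
        perror("chown/chmod");
        return 1;
    }
    if (setgroups(0, NULL) < 0 ||    /* drop supplementary groups */
        setgid(guest_gid) < 0 ||
        setuid(guest_uid) < 0) {     /* irreversible once it succeeds */
        perror("drop privileges");
        return 1;
    }
    execlp("qemu-system-x86_64", "qemu-system-x86_64",
           "-drive", "file=guest1.img,if=virtio", (char *)NULL);
    perror("execlp");
    return 127;
}

Another guest running under uid 2002 then can't touch guest1.img at all.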
But you're not always implementing that type of policy. If the guest
inherits the uid, selinux context, and namespaces of whatever launches
the guest, then you have the most flexibility from a security perspective.
How do you launch a libvirt guest in a network namespace? How do you
put it in a chroot? Today, you have to make changes to libvirt, whereas
in a direct-launch model you get all of the neat security features
Linux supports for free.
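
For instance, a direct launcher picks those features up with a couple
of system calls. A sketch only: it assumes the needed capabilities
(CAP_SYS_ADMIN, CAP_SYS_CHROOT), and that a qemu binary and image have
been copied into the invented chroot directory:

/* Sketch: fresh network namespace plus a chroot before exec'ing qemu.
 * Paths and arguments are illustrative. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (unshare(CLONE_NEWNET) < 0) {   /* private, empty net namespace */
        perror("unshare(CLONE_NEWNET)");
        return 1;
    }
    if (chroot("/srv/guest1") < 0 || chdir("/") < 0) {
        perror("chroot");
        return 1;
    }
    /* qemu binary and image must already exist inside the chroot */
    execl("/qemu-system-x86_64", "qemu-system-x86_64",
          "-drive", "file=/guest1.img,if=virtio", (char *)NULL);
    perror("execl");
    return 127;
}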
And I've said in the past that I don't like the idea of a qemud :-)
I must have missed it. Why not? Every other hypervisor has a central
management entity.
Because you end up launching all guests from a single security context.
In theory, libvirt does support this with the session URLs, but they
are currently second-class citizens in libvirt. The remote dispatch
also adds a fair bit of complexity, and at least for the use cases
I'm interested in, it's not an important feature.
If libvirt needs a local wrapper for interesting use cases, then it
has failed. You can't have a local wrapper with the esx driver, for
example.
This is off-topic, but can you detail why you don't want remote
dispatch (I assume we're talking about a multiple node deployment).
Because there are dozens of remote management APIs, and they all have
a concept of agents that run on the end nodes. When fitting
virtualization management into an existing management infrastructure,
you are always going to use a local API.
When you manage esx, do you deploy an agent? I thought it was all
done via their remote APIs.
Historically, people have deployed agents into the console OS. In
recent versions, ESX actually includes CIM agents by default.
Every typical virtualization use will eventually grow some
non-typical requirements. If libvirt explicitly refuses to support
qemu features, I don't see how we can recommend it - even if it
satisfies a user's requirements today, what about tomorrow? What
about future qemu features: will they be exposed or not?
If that is the case then we should develop qemud (which libvirt and
other apps can use).
(even if it isn't the case I think qemud is a good idea)
Yeah, that's where I'm at. I'd eventually like libvirt to use our
provided API and I can see where it would add value to the stack (by
doing things like storage and network management).
We do provide an API, qmp, and libvirt uses it?
Yeah, but we need to support more features (like guest enumeration).
The alternative is to get libvirt to just act as a thin layer to
expose qemu features directly. But honestly, what's the point of
libvirt if they did that?
For most hypervisors, that's exactly what libvirt does. For Xen, it
also bypasses Xend and the hypervisor's API, but it shouldn't really.
Historically, xend was so incredibly slow (especially for frequent
statistics collection) that it was a necessity.
Ah, reimplement rather than fix.
There's a complicated history there.
Qemu is special due to the nonexistence of qemud.
Why is sVirt implemented in libvirt? It's not the logical place for
it; rather, the logical place doesn't exist.
sVirt is not just implemented in libvirt. libvirt implements a
mechanism to set the context of a given domain and dynamically label
its resources to isolate it.
The reason it has to assign a context to a given domain is that all
domains are launched from the same security context (the libvirtd
context), since the original user's context (that of the consumer of
the libvirt API) has been lost across the domain socket interface.
If you used the /session URL, then the domain would have the security
context of whoever created the guest, which means that dynamic
labelling of the resources wouldn't be necessary (you would just do
static labelling).
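
Static labelling is then a one-time setup step rather than something a
daemon does per launch. A sketch against libselinux - the context
strings and file names are invented, and you'd link with -lselinux:

/* Sketch: label the image once, set the domain context for the next
 * exec, then launch qemu directly. */
#include <selinux/selinux.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Persistent label on the image file (the static counterpart of
     * libvirt's dynamic relabelling); context string is made up. */
    if (setfilecon("guest1.img",
                   "system_u:object_r:svirt_image_t:s0:c10,c20") < 0) {
        perror("setfilecon");
        return 1;
    }
    /* Context applied to the next execve() from this process. */
    if (setexeccon("system_u:system_r:svirt_t:s0:c10,c20") < 0) {
        perror("setexeccon");
        return 1;
    }
    execlp("qemu-system-x86_64", "qemu-system-x86_64",
           "-drive", "file=guest1.img,if=virtio", (char *)NULL);
    perror("execlp");
    return 127;
}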
This is certainly a more secure model and it's a feature of qemu that
I really wish didn't get lost in libvirt. Again, /session can do
this too but right now, /session really isn't usable in libvirt for
qemu.
That's wrong for three reasons. First, selinux is not a uid
replacement (if it were, libvirt could just suid $random_user before
launching qemu). Second, a single user's guests should be protected
from each other. Third, in many deployments the guest's owner isn't
logged in to supply the credentials; it's system management that
launches the guests.
(1) uids are just one part of an application's security context.
There's an selinux context, all of the various namespaces, capabilities,
etc. If you use a daemon to launch a guest, you lose all of that unless
you have a very sophisticated API.
(2) If you want to implement a policy that only a single guest can
access a single image, you can create an SELinux policy and use static
labelling to achieve that. That's just one type of policy though.
(3) The system management application can certainly create whatever
context it wants to launch a VM from. It comes down to who's
responsible for creating the context the guest runs under. I think
doing that at the libvirt level takes away a ton of flexibility from the
management application.
Regards,
Anthony Liguori
There's also the case of resources that can't be permanently chowned
or assigned a security label, like disk volumes or assignable devices.