On 08/18/2009 02:49 PM, Michael S. Tsirkin wrote:
>> The host kernel sees a hypercall vmexit. How does it know if it's a
>> nested-guest-to-guest hypercall or a nested-guest-to-host hypercall?
>> The two are equally valid at the same time.
> Here is how this can work - it is similar to MSI if you like:
> - by default, the device uses pio kicks
> - the nested guest driver can enable a hypercall capability in the device,
>   probably with a pci config cycle
> - guest userspace (the hypervisor running in the guest) will see this request
>   and perform a pci config cycle on the "real" device, telling it which
>   nested guest the device is assigned to
So far so good.
> - host userspace (the hypervisor running in the host) will see this.
>   It now knows both which guest the hypercalls will be for and that the
>   device in question is an emulated one, and can set up kvm appropriately.
No, it doesn't. The fact that one device uses hypercalls doesn't mean
all hypercalls are for that device. Hypercalls are a shared resource,
and there's no way to tell for a given hypercall what device it is
associated with (if any).
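For concreteness, here is a minimal sketch of the handshake Michael describes
above (pio kicks by default, hypercall kicks enabled by the nested guest and
forwarded by the guest hypervisor). Every name in it - the config offsets,
struct virt_device, and the helper functions - is made up for illustration and
does not correspond to any real kvm or qemu interface; it models the proposal,
not a working design:

/*
 * Hypothetical sketch of the kick-escalation handshake. All offsets,
 * types and helpers are invented; they only model the flow described
 * in the proposal above.
 */
#include <stdint.h>
#include <stdio.h>

#define CFG_KICK_MODE         0x40   /* hypothetical vendor capability offset */
#define KICK_MODE_PIO         0x0
#define KICK_MODE_HYPERCALL   0x1
#define CFG_KICK_OWNER        0x44   /* hypothetical: nested guest id owning the kicks */

struct virt_device {
	uint8_t  kick_mode;    /* KICK_MODE_PIO or KICK_MODE_HYPERCALL */
	uint32_t owner_guest;  /* which nested guest the hypercall kicks belong to */
};

/* Step 2: the nested guest driver enables hypercall kicks with a config write. */
static void nested_guest_enable_hypercall_kicks(struct virt_device *dev_seen_by_l2)
{
	dev_seen_by_l2->kick_mode = KICK_MODE_HYPERCALL;  /* write to CFG_KICK_MODE */
}

/*
 * Step 3: the guest hypervisor traps that config write and performs an
 * equivalent config cycle on the "real" device, telling the host which
 * nested guest the device is assigned to.
 */
static void guest_hypervisor_forward(struct virt_device *real_dev, uint32_t nested_guest_id)
{
	real_dev->kick_mode = KICK_MODE_HYPERCALL;  /* forwarded CFG_KICK_MODE write */
	real_dev->owner_guest = nested_guest_id;    /* forwarded CFG_KICK_OWNER write */
}

/*
 * Step 4: host userspace now knows the device is emulated and which nested
 * guest will issue the kicks, and could configure kvm accordingly.
 */
static void host_hypervisor_setup(const struct virt_device *real_dev)
{
	if (real_dev->kick_mode == KICK_MODE_HYPERCALL)
		printf("route hypercall kicks from nested guest %u to this device\n",
		       real_dev->owner_guest);
}

int main(void)
{
	struct virt_device l2_view  = { .kick_mode = KICK_MODE_PIO };  /* step 1: pio by default */
	struct virt_device real_dev = { .kick_mode = KICK_MODE_PIO };

	nested_guest_enable_hypercall_kicks(&l2_view);
	guest_hypervisor_forward(&real_dev, 3 /* hypothetical nested guest id */);
	host_hypervisor_setup(&real_dev);
	return 0;
}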
>> The host knows whether the guest or the nested guest is running. If the
>> guest is running, it's a guest-to-host hypercall. If the nested guest
>> is running, it's a nested-guest-to-guest hypercall. We don't have
>> nested-guest-to-host hypercalls (and couldn't unless we get agreement on
>> a protocol from all hypervisor vendors).
> Not necessarily. What I am saying is we could make this protocol part of
> the guest paravirt driver. The guest that loads the driver and enables the
> capability has to agree to the protocol. If it doesn't want to, it does
> not have to use that driver.
It would only work for kvm-on-kvm.
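To make the routing rule described above concrete: a toy sketch, with made-up
types (struct vcpu, route_hypercall - not kvm code), of how the host can
classify a hypercall vmexit purely by whether the nested guest was running,
without needing any per-device information:

/*
 * Hypothetical routing of a hypercall vmexit. The only input is whether
 * L2 (the nested guest) was running when the vmexit happened.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu {
	bool nested_guest_mode;  /* true while the nested guest (L2) is running */
};

enum hypercall_target {
	GUEST_TO_HOST,          /* L1 -> L0: handled by the host */
	NESTED_GUEST_TO_GUEST,  /* L2 -> L1: reflected to the guest hypervisor */
};

static enum hypercall_target route_hypercall(const struct vcpu *vcpu)
{
	/* No nested-guest-to-host case without a cross-vendor protocol. */
	return vcpu->nested_guest_mode ? NESTED_GUEST_TO_GUEST : GUEST_TO_HOST;
}

int main(void)
{
	struct vcpu l1 = { .nested_guest_mode = false };
	struct vcpu l2 = { .nested_guest_mode = true };

	printf("L1 vmcall -> %s\n",
	       route_hypercall(&l1) == GUEST_TO_HOST ? "host" : "guest hypervisor");
	printf("L2 vmcall -> %s\n",
	       route_hypercall(&l2) == GUEST_TO_HOST ? "host" : "guest hypervisor");
	return 0;
}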
--
error compiling committee.c: too many arguments to function