On 03/02/21 16:00, David Woodhouse wrote:
This patch set provides enough kernel support to allow hosting Xen HVM guests in KVM. It allows hypercalls to be trapped to userspace for handling, uses the existing KVM functions for writing system clock and pvclock information to Xen shared pages, and provides event channel upcall vector delivery. It's based on the first section of a patch set that Joao posted as RFC last year^W^W in 2019:

https://lore.kernel.org/kvm/20190220201609.28290-1-joao.m.martins@xxxxxxxxxx/

In v6 I've dropped the runstate support temporarily. It can come in the next round of patches, and I want to give it more thought. In particular, Paul pointed out that we need to support VCPUOP_get_runstate_info: the runstate times aren't *only* exposed to a guest by putting them directly into guest memory, so we'll need an ioctl to fetch them to userspace as well as to set them on live migration. I've expanded the padding in the newly added KVM_XEN_VCPU_[SG]ET_ATTR ioctls to make sure there's room.

I also want to double-check that we're setting the runstates faithfully, as Xen guests will expect, in all circumstances. I think we may want a way for userspace to tell the kernel to set RUNSTATE_blocked and RUNSTATE_offline, and that can be set as a vCPU attr too. I will work on that and post it along with the oft-promised second round, but this part stands alone and should be ready to merge.

The rust-vmm support for this is starting to take shape at https://github.com/alexandruag/vmm-reference/commits/xen
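For anyone wiring this up in a VMM, here is a minimal sketch of the userspace side of hypercall interception, based on the KVM_XEN_HVM_CONFIG / KVM_EXIT_XEN interface from this series. Error handling, capability checks and the rest of the vCPU run loop are omitted, and XEN_ENOSYS is a local stand-in for Xen's -ENOSYS return value rather than anything defined by the kernel headers:

/*
 * Sketch of the userspace side of hypercall interception.
 * XEN_ENOSYS is a local stand-in for Xen's -ENOSYS error code.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define XEN_ENOSYS	38

static void enable_xen_hcall_intercept(int vm_fd)
{
	struct kvm_xen_hvm_config cfg;

	/* Ask KVM to exit to userspace for Xen hypercalls. */
	memset(&cfg, 0, sizeof(cfg));
	cfg.flags = KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL;
	ioctl(vm_fd, KVM_XEN_HVM_CONFIG, &cfg);
}

/* Called when KVM_RUN returns with run->exit_reason == KVM_EXIT_XEN. */
static void handle_xen_exit(struct kvm_run *run)
{
	if (run->xen.type != KVM_EXIT_XEN_HCALL)
		return;

	/*
	 * u.hcall.input is the hypercall number and u.hcall.params[]
	 * its arguments; whatever we store in u.hcall.result is written
	 * back to the guest's return register on the next KVM_RUN.
	 */
	switch (run->xen.u.hcall.input) {
	default:
		run->xen.u.hcall.result = -XEN_ENOSYS;
		break;
	}
}

The per-vCPU pieces (vcpu_info, the pvclock time area, and eventually the runstate data discussed above) are configured from userspace in a similar spirit through the KVM_XEN_VCPU_GET_ATTR/SET_ATTR ioctls mentioned in the cover letter.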
It passes the selftests, after fixing the selftests to compile, so it must be perfect. Oh wait. :)
Seriously: this is very nice work. I agree with Christoph that it should be possible to hide it behind a Kconfig option, but I can take care of that and it need not block inclusion in linux-next.
I've queued it to kvm/queue for now; as soon as the integration tests finish (the amount of new stuff in 5.12 is pretty scary), it will be in kvm/next too.
Thanks very much!

Paolo