Re: [RFC PATCH 00/13] Add support for Mirror VM.

On 16/08/21 15:25, Ashish Kalra wrote:
From: Ashish Kalra <ashish.kalra@xxxxxxx>

This is an RFC series for Mirror VM support: mirror VMs are
essentially secondary VMs that share the encryption context
(ASID) with a primary VM. The patch set creates a new
VM and shares the primary VM's encryption context
with it using the KVM_CAP_VM_COPY_ENC_CONTEXT_FROM capability.
The mirror VM uses a separate pair of VM + vCPU file
descriptors and also uses a simplified KVM run loop;
for example, it does not support any interrupt vmexits, etc.
Currently the mirror VM shares the address space of the
primary VM.
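
For reference, a minimal sketch of how a second VM can adopt the primary VM's encryption context with this capability (illustrative only, not code taken from this series; error handling omitted):

/*
 * Illustrative sketch only: create a second VM and let it share the
 * primary VM's SEV encryption context (ASID) via
 * KVM_CAP_VM_COPY_ENC_CONTEXT_FROM.  Error handling is omitted.
 */
#include <sys/ioctl.h>
#include <linux/kvm.h>

int create_mirror_vm(int kvm_fd, int primary_vm_fd)
{
    int mirror_vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

    struct kvm_enable_cap cap = {
        .cap = KVM_CAP_VM_COPY_ENC_CONTEXT_FROM,
        .args[0] = primary_vm_fd,  /* fd of the VM whose context is shared */
    };
    ioctl(mirror_vm_fd, KVM_ENABLE_CAP, &cap);

    return mirror_vm_fd;
}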

The mirror VM can be used for running an in-guest migration
helper (MH). It also might have future uses for other in-guest
operations.

Hi,

first of all, thanks for posting this work and starting the discussion.

However, I am not sure if the in-guest migration helper vCPUs should use the existing KVM support code. For example, they probably can just always work with host CPUID (copied directly from KVM_GET_SUPPORTED_CPUID), and they do not need to interface with QEMU's MMIO logic. They would just sit on a "HLT" instruction and communicate with the main migration loop using some kind of standardized ring buffer protocol; the migration loop then executes KVM_RUN in order to start the processing of pages, and expects a KVM_EXIT_HLT when the VM has nothing to do or requires processing on the host.
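
Concretely, the host side could be as simple as the following sketch (the vCPU fd and its mmap'ed kvm_run area are assumed to already exist; mh_ring_pending() is a made-up name standing in for the ring-buffer protocol):

/*
 * Rough sketch of the simplified run loop described above.  vcpu_fd is
 * the mirror vCPU's file descriptor, run is its mmap'ed kvm_run area,
 * and mh_ring_pending() is a hypothetical check on the shared ring
 * buffer used to talk to the in-guest migration helper.
 */
#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

bool mh_ring_pending(void);    /* hypothetical ring-buffer check */

static void migration_helper_loop(int vcpu_fd, struct kvm_run *run)
{
    while (mh_ring_pending()) {
        /* Resume the helper vCPU; it drains the ring and executes HLT. */
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
            break;

        /* Only KVM_EXIT_HLT is expected; no MMIO or interrupt exits. */
        if (run->exit_reason != KVM_EXIT_HLT)
            break;
    }
}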

The migration helper can then also use its own address space, for example operating directly on ram_addr_t values with the helper running at very high virtual addresses. Migration code can use a RAMBlockNotifier to invoke KVM_SET_USER_MEMORY_REGION on the mirror VM (and never enable dirty memory logging on the mirror VM, which also gives better performance).
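
The notifier hook might look roughly like this (the callback signature is paraphrased and differs between QEMU versions; mirror_vm_fd, mirror_next_slot and mirror_ram_offset_of() are made-up names used only for the sketch):

/*
 * Sketch only: register each RAM block with the mirror VM as it
 * appears.  The RAMBlockNotifier callback signature is paraphrased;
 * mirror_vm_fd, mirror_next_slot and mirror_ram_offset_of() are
 * hypothetical.  flags stays 0, i.e. dirty logging is never enabled
 * on the mirror VM's slots.  Resizes are ignored in this sketch.
 */
static void mirror_ram_block_added(RAMBlockNotifier *n, void *host,
                                   size_t size, size_t max_size)
{
    struct kvm_userspace_memory_region region = {
        .slot            = mirror_next_slot++,
        .flags           = 0,
        /* Map the block at its ram_addr_t so the helper can work on
         * ram_addr_t values directly. */
        .guest_phys_addr = mirror_ram_offset_of(host),
        .memory_size     = size,
        .userspace_addr  = (uint64_t)(uintptr_t)host,
    };

    ioctl(mirror_vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

static RAMBlockNotifier mirror_ram_notifier = {
    .ram_block_added = mirror_ram_block_added,
};

/* At migration start: ram_block_notifier_add(&mirror_ram_notifier); */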

With this implementation, the number of mirror vCPUs does not even have to be indicated on the command line. The VM and its vCPUs can simply be created when migration starts. In the SEV-ES case, the guest can even provide the VMSA that starts the migration helper.

The disadvantage is that, as you point out, some of the infrastructure you introduce might also be useful in the future for VMPL0 operation on SEV-SNP, so my proposal above might require some code duplication. However, it might even be that VMPL0 operation works best with a model more similar to my sketch of the migration helper; it's really too early to say.

Paolo



