On Mon, Aug 16, 2021 at 04:15:46PM +0200, Paolo Bonzini wrote:
> Hi,
>
> first of all, thanks for posting this work and starting the discussion.
>
> However, I am not sure if the in-guest migration helper vCPUs should use
> the existing KVM support code. For example, they probably can just
> always work with host CPUID (copied directly from
> KVM_GET_SUPPORTED_CPUID), and they do not need to interface with QEMU's
> MMIO logic. They would just sit on a "HLT" instruction and communicate
> with the main migration loop using some kind of standardized ring buffer
> protocol; the migration loop then executes KVM_RUN in order to start the
> processing of pages, and expects a KVM_EXIT_HLT when the VM has nothing
> to do or requires processing on the host.
>
> The migration helper can then also use its own address space, for
> example operating directly on ram_addr_t values with the helper running
> at very high virtual addresses. Migration code can use a
> RAMBlockNotifier to invoke KVM_SET_USER_MEMORY_REGION on the mirror VM
> (and never enable dirty memory logging on the mirror VM, too, which has
> better performance).
>
> With this implementation, the number of mirror vCPUs does not even have
> to be indicated on the command line. The VM and its vCPUs can simply be
> created when migration starts. In the SEV-ES case, the guest can even
> provide the VMSA that starts the migration helper.

It might make sense to tweak the mirror support code so that it is more
closely tied to migration and the migration handler. On the other hand,
the usage of a mirror VM might be more general than just migration. In
some ways the mirror offers functionality similar to the VMPLs in SNP,
providing a way to run non-workload code inside the enclave. This
potentially has uses beyond migration. If that is the case, maybe we
want to keep the mirror more general.
It's also worth noting that the SMP interface that Ashish is using to
specify the mirror might come in handy if we ever want to have more than
one vCPU in the mirror. For instance, we might want to use multiple MH
vCPUs to increase throughput.

-Tobin

> The disadvantage is that, as you point out, in the future some of the
> infrastructure you introduce might be useful for VMPL0 operation on
> SEV-SNP. My proposal above might require some code duplication.
> However, it might even be that VMPL0 operation works best with a model
> more similar to my sketch of the migration helper; it's really too early
> to say.
>
> Paolo