On Wed, 2021-08-18 at 00:37 +0200, Paolo Bonzini wrote:
> On Tue, Aug 17, 2021 at 11:54 PM Steve Rutherford
> <srutherford@xxxxxxxxxx> wrote:
> > > 1) the easy one: the bottom 4G of guest memory are mapped in the
> > > mirror VM 1:1. The ram_addr_t-based addresses are shifted by
> > > either 4G or a huge value such as 2^42 (MAXPHYADDR - physical
> > > address reduction - 1). This even lets the migration helper
> > > reuse the OVMF runtime services memory map (but be careful about
> > > thread safety...).
> >
> > If I understand what you are proposing, this would only work for
> > SEV/SEV-ES, since the RMP prevents these remapping games. This
> > makes me less enthusiastic about this (but I suspect that's why
> > you call this less future proof).
>
> I called it less future proof because it allows the migration helper
> to rely more on OVMF details, but those may not apply in the future.
>
> However, you're right about SNP; the same page cannot be mapped
> twice at different GPAs by a single ASID (which includes the VM and
> the migration helper). :( That does throw a wrench in the idea of
> mapping pages by ram_addr_t(*), and this applies to both schemes.

Right, but in the current IBM approach, since we use the same mapping
for guest and mirror, we have the same GPA in both, and it should work
with SEV-SNP.

> Migrating RAM in PCI BARs is a mess anyway for SNP, because PCI BARs
> can be moved, and every time they do, the migration helper needs to
> wait for validation to happen. :(

Realistically, migration is becoming a royal pain, not just for
confidential computing but for virtual functions in general. I really
think we should look at S3 suspend, where we shut down the drivers and
then reattach on S3 resume, as the potential pathway to getting
migration working both for virtual functions and this use case.

James
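
P.S. To make the two-window layout in Paolo's quoted proposal concrete,
here is a minimal C sketch of the address arithmetic. The typedef, the
function names, and the exact base constant are illustrative
assumptions, not actual QEMU code:

    #include <stdint.h>

    /* Stand-in for QEMU's ram_addr_t: a RAM-block offset, not a GPA. */
    typedef uint64_t ram_addr_t;

    /* Illustrative base for the shifted window; Paolo suggests either
     * 4G or a huge value such as 2^42. Name and value are assumed. */
    #define MIRROR_RAM_BASE (1ULL << 42)

    /* The bottom 4G is mapped 1:1, so low GPAs (and hence the OVMF
     * runtime services memory map) keep their addresses in the
     * mirror VM. */
    static inline uint64_t mirror_gpa_from_low_gpa(uint64_t gpa)
    {
        return gpa; /* caller must ensure gpa < 4G */
    }

    /* All guest RAM is also reachable via its ram_addr_t, relocated
     * above the 1:1 window. */
    static inline uint64_t mirror_gpa_from_ram_addr(ram_addr_t ra)
    {
        return MIRROR_RAM_BASE + ra;
    }

Under SNP the RMP forbids exactly this kind of double mapping of a page
within one ASID, which is why the IBM approach above keeps the same GPA
in guest and mirror instead.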