On Wed, 2022-11-23 at 18:16 +0000, Sean Christopherson wrote:
> On Wed, Nov 23, 2022, David Woodhouse wrote:
> > On Wed, 2022-11-23 at 17:17 +0000, Sean Christopherson wrote:
> > And with or without that cache, we can *still* end up doing a partial
> > update if the page goes away. The byte with the XEN_RUNSTATE_UPDATE bit
> > might still be accessible, but bets are off about what state the rest
> > of the structure is in - and those runtimes are supposed to add up, or
> > the guest is going to get unhappy.
>
> Ugh. What a terrible ABI.
>
> > I'm actually OK with locking two GPCs. It wasn't my first choice, but
> > it's reasonable enough IMO given that none of the alternatives jump out
> > as being particularly attractive either.
>
> I detest the two GPCs, but since KVM apparently needs to provide "all or
> nothing" updates, I don't see a better option.

Actually I think it might not be so awful to do a contiguous virtual
(kernel) mapping of multiple discontiguous IOMEM PFNs. A kind of
memremapv(). We can open-code something like ioremap_prot(), doing a
single call to get_vm_area_caller() for the whole virtual size, then
individually calling ioremap_page_range() for each page (rough sketch
below).

So we *could* fix the GPC to cope with ranges which cross more than a
single page. But if the runstate area is the only user for that, as
seems to be the case so far, then it might be a bit too much additional
complexity in an area which is fun enough already.

I'll propose that we go ahead with the two-GPCs model for now, and if
we ever need to do that *again*, then we look harder at making the GPC
support multiple pages?
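
Roughly what I have in mind, purely as a sketch: memremapv() and its
signature are invented here, modelled on the generic ioremap_prot()
path, so don't read too much into the details.

#include <linux/io.h>
#include <linux/pfn.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical memremapv(): map 'nr_pages' possibly-discontiguous PFNs
 * into one contiguous kernel virtual range. Error handling and arch
 * details are glossed over.
 */
static void __iomem *memremapv(const unsigned long *pfns,
			       unsigned int nr_pages, pgprot_t prot)
{
	struct vm_struct *area;
	unsigned long vaddr;
	unsigned int i;

	/* One virtual area covering the whole range. */
	area = get_vm_area_caller(nr_pages * PAGE_SIZE, VM_IOREMAP,
				  __builtin_return_address(0));
	if (!area)
		return NULL;

	vaddr = (unsigned long)area->addr;

	/* Establish the mapping one (discontiguous) page at a time. */
	for (i = 0; i < nr_pages; i++) {
		unsigned long start = vaddr + i * PAGE_SIZE;

		if (ioremap_page_range(start, start + PAGE_SIZE,
				       PFN_PHYS(pfns[i]), prot)) {
			free_vm_area(area);
			return NULL;
		}
	}

	return (void __iomem *)vaddr;
}

The failure path just does free_vm_area(), which is what the generic
ioremap_prot() does today when ioremap_page_range() fails; whether that
is good enough for a GPC user is a separate question.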