On Mon, 2021-10-25 at 12:23 +0200, Paolo Bonzini wrote:
> On 23/10/21 21:47, Woodhouse, David wrote:
> > I almost switched to using a gfn_to_pfn_cache here and bailing out if
> > kvm_map_gfn() fails, like kvm_steal_time_set_preempted() does — but on
> > closer inspection it looks like kvm_map_gfn() will *always* fail in
> > atomic context for a page in IOMEM, which means it will silently fail
> > to make the update every single time for such guests, AFAICT. So I
> > didn't do it that way after all. And will probably fix that one too.
> >
>
> Not every single time, only if the cache is absent, stale or not
> initialized.

Hm, my reading of it suggests that it will fail even when the cache is
valid, on IOMEM PFNs for which pfn_valid() is not set:

	if (pfn_valid(pfn)) {
		page = pfn_to_page(pfn);
		if (atomic)
			hva = kmap_atomic(page);
		else
			hva = kmap(page);
#ifdef CONFIG_HAS_IOMEM
	} else if (!atomic) {
		hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
	} else {
		return -EINVAL;
#endif
	}

> In the case of steal time, record_steal_time can update the cache
> because it's never called from atomic context, while
> kvm_steal_time_set_preempted is just advisory and therefore can fail
> just fine.

Sure, but it's an 'advisory' thing which is used for optimisations in
the guest like spotting that the holder of a spinlock is preempted, and
yielding to it? We didn't do those optimisations just for fun, did we?

> One possible solution (which I even have unfinished patches for) is to
> put all the gfn_to_pfn_caches on a list, and refresh them when the MMU
> notifier receives an invalidation.

For this use case I'm not even sure why I'd *want* to cache the PFN and
explicitly kmap/memremap it, when surely by *definition* there's a
perfectly serviceable HVA which already points to it?
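
As a rough sketch of what I mean (untested, helper name invented purely
for illustration, and leaning on copy_to_user_nofault() rather than
whatever an eventual patch would actually do):

/*
 * Hypothetical sketch, not the actual patch: update the guest structure
 * through its HVA instead of kmap()/memremap() on a cached PFN.
 * copy_to_user_nofault() returns -EFAULT rather than faulting pages in,
 * so it can be called from atomic context, and the pfn_valid() vs.
 * IOMEM distinction never comes into it.
 */
static int write_guest_via_hva(struct kvm_vcpu *vcpu, gpa_t gpa,
			       const void *data, unsigned int len)
{
	unsigned long hva = kvm_vcpu_gfn_to_hva(vcpu, gpa_to_gfn(gpa));

	if (kvm_is_error_hva(hva))
		return -EFAULT;

	/* Write at the right offset within the page the HVA points to. */
	return copy_to_user_nofault((void __user *)(hva + offset_in_page(gpa)),
				    data, len);
}

If that copy fails because the page isn't resident, that's no worse than
the advisory update being skipped today, and there's no separate cache
for the MMU notifier to keep in sync.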