Re: x86 MMU: RMap Interface

On Fri, Aug 14, 2020 at 11:44:49PM -0400, contact@xxxxxxxxxxxxxxxxx wrote:
> Thanks!
> 
> Given this info, am I correct in saying that all non-MMIO guest pages are
> (1) added to the rmap upon being marked present, and (2) removed from the
> rmap upon being marked non-present?
> 
> I primarily ask because I'm observing behavior (running x86-64 guest with
> TDP/EPT enabled) wherein multiple SPTEs appear to be added to the rmap for
> the same GFN<->PFN mapping (sometimes later followed by multiple removals of
> the same GFN<->PFN mapping). My understanding was that, for a given guest,
> each GFN<->PFN mapping corresponds to exactly one rmap entry (and vice
> versa). Is this incorrect?
> 
> I observe the behavior I mentioned whether I log upon rmap updates, or upon
> mmu_spte_set() (for non-present->present) and mmu_spte_clear_track_bits() (for
> present->non-present). Perhaps I'm missing a more obvious interface for
> logging when the PFNs backing guest pages are marked as present/non-present?

The basic premise is correct, but there is at least one exception that
immediately comes to mind.  With TDP and no nested VMs, a given instance of
the MMU will have a 1:1 GFN:PFN mapping.  But, if the MMU is recreated
(reloaded with a different EPTP), e.g. as part of a fast zap, then there may
be mappings for the same GFN:PFN in both the old MMU/EPTP instance and the
new MMU/EPTP instance, and thus multiple rmap entries for that one mapping.
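If it helps to see the duplication concretely, here's a toy, self-contained
model (the names are hypothetical stand-ins, not the actual KVM code; the
real bookkeeping lives in the pte_list_add()/rmap machinery in mmu.c).  It
shows how a per-GFN rmap head ends up holding one SPTE from the old root and
one from the new root for the same GFN:PFN mapping:

#include <stdio.h>

#define RMAP_MAX 4 /* enough for this illustration */

/* Hypothetical model: one rmap head per GFN, listing the SPTEs that map it. */
struct rmap_head {
	unsigned long gfn;
	unsigned long *sptes[RMAP_MAX];
	int nr;
};

/* Stand-in for recording one more SPTE against a GFN's rmap. */
static void rmap_add(struct rmap_head *head, unsigned long *sptep)
{
	if (head->nr < RMAP_MAX)
		head->sptes[head->nr++] = sptep;
}

int main(void)
{
	/* One SPTE per MMU/EPTP instance, both mapping the same GFN:PFN. */
	unsigned long old_root_spte = 0xabc000 | 1; /* old EPTP instance */
	unsigned long new_root_spte = 0xabc000 | 1; /* new EPTP instance */
	struct rmap_head head = { .gfn = 0x1000 };

	rmap_add(&head, &old_root_spte); /* initial fault-in */
	/* fast zap: a new root is built while the old SPTEs still exist */
	rmap_add(&head, &new_root_spte); /* re-fault under the new root */

	printf("GFN 0x%lx has %d rmap entries\n", head.gfn, head.nr);
	return 0;
}

Once the old root is finally torn down, its SPTE is dropped from the list,
which would line up with the "multiple removals" you're seeing later.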

KVM currently does a fast zap (and MMU reload) when deleting memslots, which
happens multiple times during boot, so the behavior you're observing is
expected.
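And a similarly hypothetical sketch of why the fast zap produces that
window: invalidating every root is just a generation bump, with the actual
SPTE teardown (and hence the rmap removals) happening afterwards.  The field
name mmu_valid_gen mirrors the one KVM uses, but this is a model, not the
real flow (upstream, the entry point is kvm_mmu_zap_all_fast()):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of an MMU root tagged with a validity generation. */
struct mmu_root {
	unsigned long mmu_valid_gen;
};

static unsigned long current_valid_gen;

static bool root_is_obsolete(struct mmu_root *root)
{
	return root->mmu_valid_gen != current_valid_gen;
}

/* "Fast zap": invalidate all existing roots in O(1) by bumping the gen. */
static void fast_zap_all(void)
{
	current_valid_gen++;
}

int main(void)
{
	struct mmu_root root = { .mmu_valid_gen = current_valid_gen };

	fast_zap_all(); /* e.g. triggered by a memslot deletion */

	/* The next VM entry sees an obsolete root and loads a fresh EPTP;
	 * the old root's SPTEs (and rmap entries) are cleaned up after. */
	printf("root obsolete: %d\n", root_is_obsolete(&root));
	return 0;
}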


