Document it to Documentation/virtual/kvm/mmu.txt

Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx>
---
 Documentation/virtual/kvm/mmu.txt | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
index f5c4de9..9b7cfb3 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -396,6 +396,31 @@ ensures the old pages are not used any more.
 
 The invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen) are
 zapped by using lock-break technique.
 
+Fast invalidate all mmio sptes
+==============================
+As mentioned in "Reaction to events" above, kvm caches mmio information in the
+last sptes, so all mmio sptes need to be zapped when the guest mmio info
+changes. This happens when a new memslot is added or an existing memslot is
+moved.
+
+Zapping mmio sptes is also a scalability issue for guests with large memory
+and many vcpus, since it needs to hold the hot mmu-lock and walk all shadow
+pages to find every mmio spte.
+
+We fix this issue in a way similar to "Fast invalidate all pages". The global
+mmio valid generation-number is stored in kvm->memslots.generation, and every
+mmio spte stores the then-current global generation-number in its available
+bits when it is created.
+
+The global mmio valid generation-number is increased whenever the guest memory
+info changes. When the guest does an mmio access, kvm intercepts the MMIO #PF,
+walks the shadow page table and gets the mmio spte. If the generation-number
+on the spte does not equal the global generation-number, it goes to the normal
+#PF handler to update the mmio spte.
+
+Since 19 bits are used to store the generation-number in the mmio spte, we zap
+all shadow pages when the number wraps around.
+
 Further reading
 ===============
-- 
1.8.1.4