Re: [PATCH 09/54] KVM: x86/mmu: Unconditionally zap unsync SPs when creating >4k SP at GFN

On 23/06/21 17:08, Sean Christopherson wrote:
> Because the shadow page's level is incorporated into its role, if the level of
> the new page is >4k, the branch at (1) will be taken for all 4k shadow pages.
>
> Maybe something like this for a comment?

Good, integrated.

Though I also wonder why breaking out of the loop early is okay.  Initially I
thought that zapping only matters if there's no existing page with the desired
role, because otherwise the unsync page would have been zapped already by an
earlier kvm_mmu_get_page().  But what if the page was synced at the time of that
earlier kvm_mmu_get_page() call, and both were only unsynced afterwards?

It may be easier to just split the loop to avoid that additional confusion,
something like:

        /*
         * If the guest is creating an upper-level page, zap unsync pages
         * for the same gfn, because the gfn will be write protected and
         * future syncs of those unsync pages could happen with an incompatible
         * context.  While it's possible the guest is using recursive page
         * tables, in all likelihood the guest has stopped using the unsync
         * page and is installing a completely unrelated page.
         */
        if (level > PG_LEVEL_4K) {
                for_each_valid_sp(vcpu->kvm, sp, sp_list)
                        if (sp->gfn == gfn && sp->role.word != role.word &&
                            sp->unsync)
                                kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
                                                         &invalid_list);
        }
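
With the zap hoisted out like that, the lookup loop itself would only have to
find a page with the matching role.  A rough sketch of what I mean, with the
existing handling of the matching page elided:

        for_each_valid_sp(vcpu->kvm, sp, sp_list) {
                if (sp->gfn != gfn)
                        continue;
                if (sp->role.word != role.word)
                        continue;

                /* ... existing sync/link of the matching page ... */
                break;
        }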

Paolo



