Re: [PATCH] KVM: x86: move the can_unsync and prefetch checks outside of the loop

On Fri, Jan 07, 2022, Sean Christopherson wrote:
>> NAK, this change is functionally wrong.  The checks are inside the loop because
>> the flow fails if and only if there is at least one indirect, valid shadow page
>> at the target gfn.  The @prefetch check is even more restrictive as it bails if
>> there is at least one indirect, valid, synchronized shadow page.
>> The can_unsync check could be "optimized" to
>>
>>       if (!can_unsync && kvm_gfn_has_indirect_valid_sp())
>>
>> but identifying whether or not there's a valid SP requires walking the list of
>> shadow pages for the gfn, so it's simpler to just handle the check in the loop.
>> And "optimized" in quotes because both checks will be well-predicted single-uop
>> macrofused TEST+Jcc on modern CPUs, whereas walking the list twice would be
>> relatively expensive if there are shadow pages for the gfn.


So this change isn't safe. I will look into the optimization Sean
suggested. Sorry for the noise.
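
For my own reference, a rough sketch of what that "optimized" form might
look like. kvm_gfn_has_indirect_valid_sp() is only the hypothetical helper
Sean named above, it doesn't exist in the tree, and as he points out it
would have to repeat the same per-gfn hash-list walk that
mmu_try_to_unsync_pages() already does:

static bool kvm_gfn_has_indirect_valid_sp(struct kvm *kvm, gfn_t gfn)
{
	struct kvm_mmu_page *sp;

	/* Same hash-list walk as the main loop, so the list is walked twice. */
	for_each_gfn_indirect_valid_sp(kvm, sp, gfn)
		return true;

	return false;
}

	/* ...then, in mmu_try_to_unsync_pages(), ahead of the loop: */
	if (!can_unsync && kvm_gfn_has_indirect_valid_sp(kvm, gfn))
		return -EPERM;

The prefetch bail-out would need an even more restrictive walk (indirect,
valid and already synchronized), so I see why the single TEST+Jcc inside
the existing loop is preferable.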


On Fri, Jan 7, 2022 at 11:46 PM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>
> On 1/7/22 09:25, Vihas Mak wrote:
> > mmu_try_to_unsync_pages() performs the !can_unsync check before
> > attempting to unsync any shadow pages.
> > This check is currently performed inside the loop.
> > It's redundant to perform it on every iteration when can_unsync is
> > true, as the can_unsync parameter isn't updated inside the loop.
> > Move the check outside of the loop.
> >
> > The same applies to the prefetch check.
>
> The meaning changes if the loop does not execute at all.  Is this safe?
>
> Paolo
>
> > Signed-off-by: Vihas Mak <makvihas@xxxxxxxxx>
> > Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> > Cc: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> > Cc: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> > Cc: Jim Mattson <jmattson@xxxxxxxxxx>
> > Cc: Joerg Roedel <joro@xxxxxxxxxx>
> > ---
> >   arch/x86/kvm/mmu/mmu.c | 11 +++++------
> >   1 file changed, 5 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 1d275e9d7..53f4b8b07 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -2586,6 +2586,11 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
> >       if (kvm_slot_page_track_is_active(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
> >               return -EPERM;
> >
> > +     if (!can_unsync)
> > +             return -EPERM;
> > +
> > +     if (prefetch)
> > +             return -EEXIST;
> >       /*
> >        * The page is not write-tracked, mark existing shadow pages unsync
> >        * unless KVM is synchronizing an unsync SP (can_unsync = false).  In
> > @@ -2593,15 +2598,9 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
> >        * allowing shadow pages to become unsync (writable by the guest).
> >        */
> >       for_each_gfn_indirect_valid_sp(kvm, sp, gfn) {
> > -             if (!can_unsync)
> > -                     return -EPERM;
> > -
> >               if (sp->unsync)
> >                       continue;
> >
> > -             if (prefetch)
> > -                     return -EEXIST;
> > -
>


--
Thanks,
Vihas


