Re: [PATCH] KVM: x86/mmu: Remove KVM MMU write lock when accessing indirect_shadow_pages

On Tue, Jun 6, 2023 at 5:28 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Tue, Jun 06, 2023, Mingwei Zhang wrote:
> > > > Hmm. I agree with both points above, but the change below seems too
> > > > heavyweight. smp_mb() is an mfence(), i.e., it serializes all
> > > > loads/stores before the instruction. Doing that for every shadow page
> > > > creation and destruction seems like a lot.
> > >
> > > No, the smp_*b() variants are just compiler barriers on x86.
> >
> > Hmm, smp_mb() is a "lock addl" now.  See commit 450cbdd0125c
> > ("locking/x86: Use LOCK ADD for smp_mb() instead of MFENCE").
> >
> > So smp_mb() is not a free lunch, and we need to be a bit careful
> > here.
>
> Oh, those sneaky macros.  x86 #defines __smp_mb(), not the outer helper.  I'll
> take a closer look before posting to see if there's a way to avoid the runtime
> barrier.

I checked again: using smp_wmb() and smp_rmb() should be fine, since on
x86 those are just compiler barriers. We don't need a full smp_mb() here.
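
For concreteness, here is a minimal sketch of the pairing I have in mind
(the helper names are made up for illustration and are not from the
actual patch; kvm->arch.indirect_shadow_pages is the real counter):

/*
 * Purely illustrative sketch, not the actual patch: how the
 * smp_wmb()/smp_rmb() pairing would look for a lockless reader of
 * kvm->arch.indirect_shadow_pages.
 */

/* Writer side, under mmu_lock, when an indirect shadow page is created. */
static void account_indirect_sp(struct kvm *kvm)
{
	/*
	 * Ensure the shadow page state written earlier is visible before
	 * the counter update can be observed by the lockless reader.
	 */
	smp_wmb();
	kvm->arch.indirect_shadow_pages++;
}

/* Reader side, lockless, e.g. on the emulation path. */
static bool any_indirect_sps(struct kvm *kvm)
{
	if (!READ_ONCE(kvm->arch.indirect_shadow_pages))
		return false;

	/* Pairs with the smp_wmb() above, before touching SP state. */
	smp_rmb();
	return true;
}

On x86 both smp_wmb() and smp_rmb() compile down to barrier(), so neither
path gains a fence instruction.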

Thanks.
-Mingwei



