Re: [PATCH v2 02/17] kvm: x86: Avoid taking MMU lock in kvm_mmu_sync_roots if no sync is needed

On 27/06/2018 05:05, Junaid Shahid wrote:
> As for the control dependency, Documentation/memory-barriers.txt says that load-store control dependencies are already ordered even without an explicit barrier, e.g.:
> 
>         q = READ_ONCE(a);
>         if (q) {
>                 WRITE_ONCE(b, 1);
>         }
> 
> which roughly maps in this case to:
> 
>         q = READ_ONCE(sp->unsync) || READ_ONCE(sp->unsync_children);
>         if (q) {
>                 spin_lock();
> 		...
>         }
> 
> Since spin_lock() involves a store via the atomic test-and-set, I would
> assume that it falls under the load-store control dependency case.

(At least in theory) the load part of spin_lock is an acquire, but the
store is a separate access even though the read-modify-write as a whole
is atomic.  Therefore the load can be moved before the "if".
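To illustrate the load/store split, here is a toy cmpxchg-based lock
(purely a sketch, not the kernel's actual qspinlock code; the name
toy_spin_lock is made up, and it assumes the <linux/atomic.h> helpers):

        /*
         * Toy sketch only: the "lock" operation is a load (observe the
         * lock word) followed by a store (claim it).  In the kernel
         * memory model the acquire ordering of the _acquire variant
         * attaches to the load half of the read-modify-write, not to
         * the store half.
         */
        static inline void toy_spin_lock(atomic_t *lock)
        {
                /* spin until the cmpxchg sees 0 and stores 1 */
                while (atomic_cmpxchg_acquire(lock, 0, 1) != 0)
                        cpu_relax();
        }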

In general, control dependencies are tricky and IIRC they don't even have
a matching concept in C11, so I would not be afraid of avoiding them...

Thanks,

Paolo

> I do agree that there is basically no run-time cost to add a barrier
> here, but my concern is more about added confusion for someone
> modifying this code later. Nevertheless, if you think that we need to
> add it to be extra safe, then I guess I can do the loads via
> smp_load_acquire().
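
For reference, a minimal sketch of what doing the loads via
smp_load_acquire() could look like, mirroring the shape of the example
above rather than the actual patch (it assumes the unsync and
unsync_children fields are plain scalars and that the usual
vcpu->kvm->mmu_lock is taken):

        /*
         * smp_load_acquire() orders every memory access after it behind
         * the load itself, so nothing inside the locked section can be
         * reordered before these reads (provided the writer side uses a
         * matching release or full barrier when setting the flags).
         */
        if (smp_load_acquire(&sp->unsync) ||
            smp_load_acquire(&sp->unsync_children)) {
                spin_lock(&vcpu->kvm->mmu_lock);
                ...
        }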