On Mon, Feb 11, 2019 at 11:31 AM Waiman Long <longman@xxxxxxxxxx> wrote:
>
> Modify __down_read_trylock() to make it generate slightly better code
> (smaller and maybe a tiny bit faster).

This looks good, but I would ask you to try one slightly different
approach. Instead of this:

>         long tmp = atomic_long_read(&sem->count);
>
>         while (tmp >= 0) {
>                 if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
>                                 tmp + RWSEM_ACTIVE_READ_BIAS)) {
>                         return 1;
>                 }
>         }

try doing this instead:

        long tmp = 0;

        do {
                if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
                                tmp + RWSEM_ACTIVE_READ_BIAS)) {
                        return 1;
                }
        } while (tmp >= 0);
        return 0;

because especially when it comes to locking, it's usually better to
just *guess* that the lock is unlocked than it is to actually read
from the line to see what the state is.

Often - but certainly not always - the lock is the first access to the
target cacheline, and assuming the trylock is successful (which I
think is the case we want to optimize for), we're much better off
causing that first access to be a read-for-ownership rather than a
read-for-sharing.

Because if you first read from the line and then do a cmpxchg, and the
line was not in the cache, your cache coherency protocol will
generally go through two states: first shared (for the initial read)
and then exclusive-dirty (for the cmpxchg).

Now, this is obviously very micro-architecture dependent, and in fact
the microarchitecture could even see the "predict fallthrough to a
cmpxchg with the same address" pattern and turn the first read into a
read-for-ownership, but we've done this at some point before, and the
"guess unlocked" version was actually the one that performed better.

Of course, the downside is that it might be worse when the guess is
incorrect - either because of a nested read lock or due to an actual
conflict with a write - but on the whole those *should* be the rare
cases, and not the cases where we necessarily optimize for latency of
the operation.

Hmm?

               Linus
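
[ A minimal user-space sketch of the two loop shapes above, for anyone
who wants to poke at the generated code outside the kernel. It is
built on C11 <stdatomic.h> rather than the kernel's atomic_long_* API,
and the toy_rwsem struct, the RWSEM_ACTIVE_READ_BIAS stand-in value,
and the main() harness are made up for illustration only. ]

#include <stdatomic.h>
#include <stdio.h>

/* Stand-in bias value, not the kernel's. */
#define RWSEM_ACTIVE_READ_BIAS 1L

struct toy_rwsem {
        atomic_long count;      /* >= 0: unlocked or held by readers */
};

/* The patch's shape: read the line first, then cmpxchg. */
static int trylock_read_first(struct toy_rwsem *sem)
{
        long tmp = atomic_load_explicit(&sem->count, memory_order_relaxed);

        while (tmp >= 0) {
                /* On failure, compare_exchange reloads tmp with the
                 * current count, like atomic_long_try_cmpxchg_acquire(). */
                if (atomic_compare_exchange_strong_explicit(&sem->count,
                                &tmp, tmp + RWSEM_ACTIVE_READ_BIAS,
                                memory_order_acquire, memory_order_relaxed))
                        return 1;
        }
        return 0;
}

/* The suggested shape: guess "unlocked" (count == 0), so the very
 * first access to the cacheline is the cmpxchg itself and can be a
 * read-for-ownership. A wrong guess just retries with the real value. */
static int trylock_guess_unlocked(struct toy_rwsem *sem)
{
        long tmp = 0;

        do {
                if (atomic_compare_exchange_strong_explicit(&sem->count,
                                &tmp, tmp + RWSEM_ACTIVE_READ_BIAS,
                                memory_order_acquire, memory_order_relaxed))
                        return 1;
        } while (tmp >= 0);
        return 0;
}

int main(void)
{
        struct toy_rwsem sem = { .count = 0 };

        /* The first trylock guesses right (count was 0); the second
         * guesses wrong, reloads tmp, and succeeds on the retry. */
        printf("guess-unlocked #1: %d\n", trylock_guess_unlocked(&sem));
        printf("guess-unlocked #2: %d\n", trylock_guess_unlocked(&sem));
        printf("read-first:        %d\n", trylock_read_first(&sem));
        printf("count: %ld\n", atomic_load(&sem.count));
        return 0;
}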