On Fri, Jun 21, 2013 at 5:00 PM, Davidlohr Bueso <davidlohr.bueso@xxxxxx> wrote:
> On Fri, 2013-06-21 at 16:51 -0700, Tim Chen wrote:
>> In this patchset, we introduce two optimizations to the read-write
>> semaphore. The first reduces cache bouncing of the sem->count field
>> by pre-reading sem->count and avoiding the cmpxchg when possible.
>> The second patch introduces optimistic spinning logic, similar to
>> that in the mutex code, for writer lock acquisition of the rw-sem.
>>
>> Combining the two patches, in testing by Davidlohr Bueso on aim7
>> workloads on an 8-socket, 80-core system, he saw improvements of
>> alltests (+14.5%), custom (+17%), disk (+11%), high_systime
>> (+5%), shared (+15%) and short (+4%), most of them after around 500
>> users, when i_mmap was implemented as an rwsem.
>>
>> Feedback on the effectiveness of these tweaks on other workloads
>> would be appreciated.
>
> Tim, I was really hoping to send all this in one big bundle. I was doing
> some further testing (enabling hyperthreading and some Oracle runs);
> fortunately everything looks ok and we are getting actual improvements
> on large boxes.
>
> That said, how about I send you my i_mmap rwsem patchset for a v2 of
> this patchset?

I'm a bit confused about the state of these patchsets - it looks like
I'm only copied on half of the conversations. Should I wait for a v2
here, or should I hunt down Alex's version of things, or...?

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.