On Sun, 27 Aug 2017 22:17:55 -0700 Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Sun, Aug 27, 2017 at 6:29 PM, Nicholas Piggin <npiggin@xxxxxxxxx> wrote:
> >
> > BTW. since you are looking at this stuff, one other small problem I remember
> > with exclusive waiters is that losing to a concurrent locker puts them to
> > the back of the queue. I think that could be fixed with some small change to
> > the wait loops (first add to tail, then retries add to head). Thoughts?
>
> No, not that way.
>
> First off, it's oddly complicated, but more importantly, the real
> unfairness you lose to is not other things on the wait queue, but to
> other lockers that aren't on the wait-queue at all, but instead just
> come in and do a "test-and-set" without ever even going through the
> slow path.

Right, there is that unfairness *as well*. The requeue-to-tail behaviour
seems to make it worse, and I thought this looked like a simple way to
improve it.

> So instead of playing queuing games, you'd need to just change the
> unlock sequence. Right now we basically do:
>
>  - clear lock bit and atomically test if contended (and we play games
> with bit numbering to do that atomic test efficiently)
>
>  - if contended, wake things up
>
> and you'd change the logic to be
>
>  - if contended, don't clear the lock bit at all, just transfer the
> lock ownership directly to the waiters by walking the wait list
>
>  - clear the lock bit only once there are no more wait entries (either
> because there were no waiters at all, or because all the entries were
> just waiting for the lock to be released)
>
> which is certainly doable with a couple of small extensions to the
> page wait key data structure.

Yeah, that would be ideal. Conceptually it's trivial, though I guess some
care has to be taken to transfer the memory ordering along with the lock.
It could be a good concept to apply elsewhere too. (A rough userspace
sketch of the handoff idea is included below, after the sign-off.)

> But most of my clever schemes the last few days were abject failures,
> and honestly, it's late in the rc.
>
> In fact, this late in the game I probably wouldn't even have committed
> the small cleanups I did if it wasn't for the fact that thinking of
> the whole WQ_FLAG_EXCLUSIVE bit made me find the bug.
>
> So the cleanups were actually what got me to look at the problem in
> the first place, and then I went "I'm going to commit the cleanup, and
> then I can think about the bug I just found".
>
> I'm just happy that the fix seems to be trivial. I was afraid I'd have
> to do something nastier (like have the EINTR case send another
> explicit wakeup to make up for the lost one, or some ugly hack like
> that).
>
> It was only when I started looking at the history of that code, and I
> saw the old bit_lock code, and I went "Hmm. That has the _same_ bug -
> oh wait, no it doesn't!" that I realized that there was that simple
> fix.
>
> You weren't cc'd on the earlier part of the discussion, you only got
> added when I realized what the history and simple fix was.

You're right, no such improvement would be appropriate for 4.14.

Thanks,
Nick
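
For illustration only, here is a minimal userspace sketch of the direct-handoff
unlock described above: the lock bit is never cleared while exclusive waiters
are queued; instead, ownership is passed straight to a waiter, so a locker that
only does a bare test-and-set on the bit can never jump the queue. This is not
the mm/filemap.c code -- the names (struct handoff_lock, handoff_lock_acquire/
release) are made up, it uses pthreads rather than the page wait queues, and it
does not model the wait-key extensions, the memory-ordering details mentioned
above, or FIFO ordering among the waiters themselves.

/*
 * Illustration only -- not kernel code.  Hypothetical names throughout.
 * Build with: cc -pthread handoff.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct handoff_lock {
	pthread_mutex_t wq_lock;  /* protects everything below */
	pthread_cond_t  wq;       /* where exclusive waiters sleep */
	int  waiters;             /* number of queued exclusive waiters */
	bool locked;              /* the "lock bit" (think PG_locked) */
	bool handoff;             /* lock was passed directly to a waiter */
};

static void handoff_lock_acquire(struct handoff_lock *l)
{
	pthread_mutex_lock(&l->wq_lock);
	if (!l->locked) {
		/* Fast path: uncontended, take the lock bit. */
		l->locked = true;
		pthread_mutex_unlock(&l->wq_lock);
		return;
	}
	/* Slow path: queue as an exclusive waiter. */
	l->waiters++;
	while (!l->handoff)
		pthread_cond_wait(&l->wq, &l->wq_lock);
	/* Ownership was handed to us; the lock bit was never cleared. */
	l->handoff = false;
	l->waiters--;
	pthread_mutex_unlock(&l->wq_lock);
}

static void handoff_lock_release(struct handoff_lock *l)
{
	pthread_mutex_lock(&l->wq_lock);
	if (l->waiters > 0) {
		/*
		 * Contended: do NOT clear the lock bit.  Pass ownership
		 * straight to a waiter, so a bare test-and-set locker
		 * arriving now still sees the lock held.
		 */
		l->handoff = true;
		pthread_cond_signal(&l->wq);
	} else {
		/* No waiters left: only now is the lock bit cleared. */
		l->locked = false;
	}
	pthread_mutex_unlock(&l->wq_lock);
}

static struct handoff_lock L = {
	.wq_lock = PTHREAD_MUTEX_INITIALIZER,
	.wq      = PTHREAD_COND_INITIALIZER,
};

static void *worker(void *arg)
{
	const char *name = arg;

	for (int i = 0; i < 3; i++) {
		handoff_lock_acquire(&L);
		printf("%s holds the lock (round %d)\n", name, i);
		handoff_lock_release(&L);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, "thread A");
	pthread_create(&b, NULL, worker, "thread B");
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

The only interesting part of the sketch is the shape of handoff_lock_release():
the "locked" flag stands in for the page lock bit, and it is cleared only on
the no-waiters path, which is what keeps the test-and-set fast path from
stealing the lock out from under queued waiters.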