On Thu, Jul 06, 2017 at 06:50:36PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 06, 2017 at 09:20:24AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 06, 2017 at 06:05:55PM +0200, Peter Zijlstra wrote:
> > > On Thu, Jul 06, 2017 at 02:12:24PM +0000, David Laight wrote:
> > > > From: Paul E. McKenney
> > > > [ . . . ]
> > > 
> > > Now on the one hand I feel, like Oleg, that it would be a shame to
> > > lose the optimization; OTOH this thing is really, really tricky to
> > > use and has led to a number of bugs already.
> > 
> > I do agree, it is a bit sad to see these optimizations go. So, should
> > this make mainline, I will be tagging the commits that remove
> > spin_unlock_wait() so that they can be easily reverted should someone
> > come up with good semantics and a compelling use case with compelling
> > performance benefits.
> 
> Ha!, but what would constitute 'good semantics'?
> 
> The current thing is something along the lines of:
> 
>   "Waits for the currently observed critical section to complete with
>    ACQUIRE ordering such that it will observe whatever state was left
>    by said critical section."
> 
> With the 'obvious' benefit of limited interference on those actually
> wanting to acquire the lock, and a shorter wait time on our side too,
> since we only need to wait for completion of the current section, and
> not for however many contenders are before us.
> 
> Not sure I have an actual (micro) benchmark that shows a difference,
> though.
> 
> Is this all good enough to retain the thing? I dunno. Like I said, I'm
> conflicted on the whole thing. On the one hand it's a nice optimization;
> on the other hand I don't want to have to keep fixing these bugs.

As I've said, I'd be keen to see us drop this and bring it back if/when
we get a compelling use-case along with performance numbers. At that
point, we'd be in a better position to define the semantics anyway,
knowing what exactly is expected by the use-case.

Will
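
For reference, the trade-off discussed above looks roughly like the
sketch below. This is illustrative only, not code from the series:
foo_lock, foo_state and the foo_*() helpers are made-up names, and the
"replacement" variant simply reflects the usual fallback of taking and
dropping the lock once spin_unlock_wait() is gone.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_lock);
static int foo_state;

/* Writer: update the state under the lock. */
static void foo_set_state(int new_state)
{
	spin_lock(&foo_lock);
	foo_state = new_state;
	spin_unlock(&foo_lock);
}

/*
 * Old pattern: wait only for the critical section we currently observe
 * to complete.  Per the semantics quoted above, this orders like an
 * ACQUIRE, so the read below sees whatever that section stored, but we
 * never queue up behind later contenders for the lock.
 */
static int foo_read_old(void)
{
	spin_unlock_wait(&foo_lock);
	return READ_ONCE(foo_state);	/* lockless read; later writers may race */
}

/*
 * Replacement once spin_unlock_wait() is removed: take and release the
 * lock.  The ordering is at least as strong, but we may now wait behind
 * every contender already queued on the lock, which is the cost being
 * weighed against the bugs mentioned above.
 */
static int foo_read_new(void)
{
	spin_lock(&foo_lock);
	spin_unlock(&foo_lock);
	return foo_state;
}

Whether something like foo_read_old() is actually safe depends entirely
on what the surrounding code guarantees about later writers, which is
exactly the sort of reasoning that has proven easy to get wrong.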