On 16/09/2015 10:57, Christian Borntraeger wrote:
> On 16.09.2015 at 10:32, Paolo Bonzini wrote:
>>
>> On 15/09/2015 19:38, Paul E. McKenney wrote:
>>> Excellent points!
>>>
>>> Other options in such situations include the following:
>>>
>>> o	Rework so that the code uses call_rcu*() instead of
>>>	*_expedited().
>>>
>>> o	Maintain a per-task or per-CPU counter so that every so many
>>>	*_expedited() invocations instead uses the non-expedited
>>>	counterpart.  (For example, synchronize_rcu() instead of
>>>	synchronize_rcu_expedited().)
>>
>> Or just use ratelimit (untested):
>
> One of my tests was to always replace synchronize_sched_expedited with
> synchronize_sched, and things turned out to be even worse. Not sure if
> it makes sense to test your in-the-middle approach?

I don't think it applies here, since down_write/up_write is a
synchronous API.

If the revert isn't easy, I think backporting rcu_sync is the best bet.
The issue is that rcu_sync doesn't eliminate synchronize_sched, it only
makes it rarer, so it may not eliminate the root cause of the problem.

Paolo