On Wed, 21 May 2014, Vladimir Davydov wrote:

> Seems I've found a better way to avoid this race, which does not involve
> messing up the free hot paths. The idea is to explicitly zap each per-cpu
> partial list by setting it to point to an invalid ptr. Since
> put_cpu_partial(), which is called from __slab_free(), uses an atomic
> cmpxchg for adding a new partial slab to a per-cpu partial list, it is
> enough to add a check there for whether the partials are zapped and bail
> out if so.
>
> The patch doing the trick is attached. Could you please take a look at
> it once time permits?

Well, if you set s->cpu_partial = 0 then the slab should not be added to
the partial lists. OK, it's put on there temporarily, but then immediately
moved to the node partial list in put_cpu_partial().
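
[Editor's note: the attached patch is not quoted in this message, so the
following is only a minimal sketch of the check Vladimir describes. The
sentinel name CPU_PARTIAL_ZAPPED is invented for illustration, the
pages/pobjects accounting and drain logic of the real put_cpu_partial()
in mm/slub.c are omitted, and what the zapped path should do with the
rejected slab (e.g. freeze it onto the node partial list) is elided.]

	/*
	 * Illustrative sketch only -- not the attached patch.  Assumes
	 * kmem_cache_destroy() stores this sentinel in each CPU's
	 * ->partial pointer before draining the cache.
	 */
	#define CPU_PARTIAL_ZAPPED	((struct page *)-1L)

	static void put_cpu_partial(struct kmem_cache *s,
				    struct page *page, int drain)
	{
		struct page *oldpage;

		do {
			oldpage = this_cpu_read(s->cpu_slab->partial);

			/*
			 * The per-cpu partial lists of this cache have
			 * been zapped: bail out rather than race with
			 * the destroyer.
			 */
			if (oldpage == CPU_PARTIAL_ZAPPED)
				return;

			page->next = oldpage;
			/*
			 * The cmpxchg installs the new list head only if
			 * nobody (including the zapper) changed ->partial
			 * under us; otherwise retry, re-checking the
			 * sentinel on each pass.
			 */
		} while (this_cpu_cmpxchg(s->cpu_slab->partial,
					  oldpage, page) != oldpage);
	}

Because both sides go through the same atomic update of ->partial, either
the freeing CPU observes the sentinel and bails out, or the zapper observes
the newly added slab and can drain it, so no slab is lost in the race.
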