On Mon, Mar 28, 2016 at 07:50:36PM -0500, Christoph Lameter wrote:
> On Mon, 28 Mar 2016, js1304@xxxxxxxxx wrote:
>
> > Major kmem_cache metadata in the slab subsystem is synchronized with
> > the slab_mutex. In SLAB, if any of it is changed, the node's shared
> > array cache is freed and re-populated. If __kmem_cache_shrink() is
> > called at the same time, it calls drain_array() with n->shared without
> > holding the node lock, so a race can occur.
> >
> > We could fix this small theoretical race condition by holding the node
> > lock in drain_array(), but holding the slab_mutex in kmem_cache_shrink()
> > looks like the more appropriate solution, because a stable state makes
> > things less error-prone and this is not a performance-critical path.
>
> Ummm.. Taking the mutex is added to common code, so this will also affect
> SLUB. The patch needs to consider this. Do we want to force all
> allocators to run shrinking only while holding the lock? SLUB does not
> need to hold the mutex. And frankly, the mutex is for reconfiguration of
> metadata, which is *not* occurring here. A shrink operation does not do
> that. Can we figure out a SLAB-specific way of handling synchronization
> in the strange free/realloc cycle?
>
> It seems that taking the node lock is the appropriate level of
> synchronization, since the concern is with the contents of a shared cache
> at that level. There is no change of metadata which would require the
> mutex.

Okay. I will fix it. Thanks.
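For reference, below is a minimal user-space sketch (not the actual mm/slab.c
code) of the locking scheme suggested above: every access to the node's shared
array cache happens under the per-node lock, so a drain racing with the
free/re-populate cycle can never dereference a stale or half-built cache. The
names (struct node, struct array_cache, drain_array, list_lock) only mirror the
SLAB structures for readability; the model uses a pthread mutex where the
kernel uses the per-node list_lock spinlock, and all sizing/NUMA details are
omitted.

/*
 * User-space model of the synchronization discussed above: the per-node
 * lock protects the node's shared array cache, so a drain racing with a
 * free/re-populate cycle cannot touch a stale pointer.  Names only mirror
 * the kernel structures; this is not the mm/slab.c implementation.
 */
#include <pthread.h>
#include <stdlib.h>

struct array_cache {
	unsigned int avail;		/* number of cached objects held */
	void *entry[16];		/* the cached object pointers    */
};

struct node {
	pthread_mutex_t list_lock;	/* models the per-node lock  */
	struct array_cache *shared;	/* models n->shared          */
};

/* Drop the cached objects; must be called with list_lock held. */
static void drain_array_locked(struct array_cache *ac)
{
	if (!ac)
		return;
	for (unsigned int i = 0; i < ac->avail; i++)
		free(ac->entry[i]);
	ac->avail = 0;
}

/*
 * The drain path: take the node lock before even looking at n->shared,
 * so a concurrent free/re-populate cannot pull it out from under us.
 */
static void drain_array(struct node *n)
{
	pthread_mutex_lock(&n->list_lock);
	drain_array_locked(n->shared);
	pthread_mutex_unlock(&n->list_lock);
}

/*
 * The "metadata change" path: drop the old shared cache and install a
 * new one, again under the node lock.
 */
static void repopulate_shared(struct node *n)
{
	struct array_cache *new_ac = calloc(1, sizeof(*new_ac));
	struct array_cache *old_ac;

	pthread_mutex_lock(&n->list_lock);
	old_ac = n->shared;
	drain_array_locked(old_ac);	/* give back what the old cache held */
	n->shared = new_ac;
	pthread_mutex_unlock(&n->list_lock);

	free(old_ac);
}

int main(void)
{
	struct node n;

	pthread_mutex_init(&n.list_lock, NULL);
	n.shared = calloc(1, sizeof(*n.shared));
	n.shared->entry[n.shared->avail++] = malloc(32);

	repopulate_shared(&n);		/* would race with drain without the lock */
	drain_array(&n);

	free(n.shared);
	pthread_mutex_destroy(&n.list_lock);
	return 0;
}

Compile with gcc -pthread. The only point of the model is the lock placement:
the shared-cache pointer is never read or replaced outside the node lock, which
is the level of synchronization Christoph argues for, without dragging the
slab_mutex into the shrink path.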