On Fri, 6 Jun 2014, Vladimir Davydov wrote:

> This patch makes SLUB's implementation of kmem_cache_free
> non-preemptable. As a result, synchronize_sched() will work as a barrier
> against kmem_cache_free's in flight, so that issuing it before cache
> destruction will protect us against the use-after-free.

Subject: slub: reenable preemption before the freeing of slabs from slab_free

I would prefer to call the page allocator with preemption enabled if
possible.

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-05-29 11:45:32.065859887 -0500
+++ linux/mm/slub.c	2014-06-06 09:45:12.822480834 -0500
@@ -1998,6 +1998,7 @@
 	if (n)
 		spin_unlock(&n->list_lock);
+	preempt_enable();
 	while (discard_page) {
 		page = discard_page;
 		discard_page = discard_page->next;
@@ -2006,6 +2007,7 @@
 		discard_slab(s, page);
 		stat(s, FREE_SLAB);
 	}
+	preempt_disable();
 #endif
 }
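
[Editor's note: a minimal sketch of the barrier pattern Vladimir's quoted
description relies on, for readers following the thread. This is not code
from the patch series; the helper name destroy_memcg_cache_example() and the
elided bookkeeping steps are hypothetical. It only illustrates why keeping
kmem_cache_free() non-preemptible lets synchronize_sched() act as a barrier
against frees already in flight.]

	/*
	 * Sketch only, not from the series.  If kmem_cache_free() runs
	 * entirely with preemption disabled, then synchronize_sched()
	 * cannot return until every CPU has passed through a preemptible
	 * state, i.e. until every free that was already executing when we
	 * started waiting has finished.
	 */
	#include <linux/slab.h>
	#include <linux/rcupdate.h>

	static void destroy_memcg_cache_example(struct kmem_cache *s)
	{
		/*
		 * Step 1 (elided): make the cache unreachable so no new
		 * allocations or frees can find it.
		 */

		/*
		 * Step 2: wait for frees already running on other CPUs.
		 * They hold preemption off, so once synchronize_sched()
		 * returns they are done and cannot touch the cache.
		 */
		synchronize_sched();

		/* Step 3: now it is safe to tear the cache down. */
		kmem_cache_destroy(s);
	}

The patch above addresses the other side of the trade-off: discard_slab()
hands slab pages back to the page allocator, and Christoph would rather make
that call with preemption enabled, hence the preempt_enable()/preempt_disable()
pair around the discard loop.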