Hi Christoph,

On Sat, 8 Sep 2012 18:27:10 +0000 Christoph Lameter <cl@xxxxxxxxx> wrote:
>
> Thanks Tony for the additional information regarding the pointer. That got
> me thinking about something different.
>
> Try the following fix:
>
> Subject: slub: Zero initial memory segment for kmem_cache and kmem_cache_node
>
> Earlier patches in the common set moved the zeroing of the kmem_cache
> structure into common code. See "Move allocation of kmem_cache into
> common code".
>
> The allocation for the two special structures is still done from slub
> specific code but no zeroing is done, since the cache creation functions
> used to zero. This now needs to be updated so that the structures are
> zeroed during allocation in kmem_cache_init(). Otherwise random pointer
> values may be followed.
>
> Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>
>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c	2012-09-08 13:21:33.523056357 -0500
> +++ linux/mm/slub.c	2012-09-08 13:22:12.483056947 -0500
> @@ -3705,7 +3705,7 @@
>  	/* Allocate two kmem_caches from the page allocator */
>  	kmalloc_size = ALIGN(kmem_size, cache_line_size());
>  	order = get_order(2 * kmalloc_size);
> -	kmem_cache = (void *)__get_free_pages(GFP_NOWAIT, order);
> +	kmem_cache = (void *)__get_free_pages(GFP_NOWAIT|__GFP_ZERO, order);
>
>  	/*
>  	 * Must first have the slab cache available for the allocations of the

I have added this as a merge fix patch to linux-next today in the
anticipation that it will be added to the slab tree ASAP.

-- 
Cheers,
Stephen Rothwell
sfr@xxxxxxxxxxxxxxxx