On Tue, Jun 19, 2012 at 02:20:59PM -0700, Tejun Heo wrote:
> Something like the following should fix it.
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 32a0a5e..2770970 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -148,11 +148,15 @@ phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
>   */
>  int __init_memblock memblock_free_reserved_regions(void)
>  {
> +#ifndef CONFIG_DEBUG_PAGEALLOC
>  	if (memblock.reserved.regions == memblock_reserved_init_regions)
>  		return 0;
>
>  	return memblock_free(__pa(memblock.reserved.regions),
>  		sizeof(struct memblock_region) * memblock.reserved.max);
> +#else
> +	return 0;
> +#endif

BTW, this is just ugly and I don't think we're saving any noticeable
amount by doing this "free - give it to page allocator - reserve again"
dancing.  We should just allocate regions aligned to page boundaries and
free them later when memblock is no longer in use.

Thanks.

--
tejun