On Tue, Jun 19, 2012 at 2:26 PM, Tejun Heo <tj@xxxxxxxxxx> wrote:
> On Tue, Jun 19, 2012 at 02:20:59PM -0700, Tejun Heo wrote:
>> Something like the following should fix it.
>>
>> diff --git a/mm/memblock.c b/mm/memblock.c
>> index 32a0a5e..2770970 100644
>> --- a/mm/memblock.c
>> +++ b/mm/memblock.c
>> @@ -148,11 +148,15 @@ phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
>>   */
>>  int __init_memblock memblock_free_reserved_regions(void)
>>  {
>> +#ifndef CONFIG_DEBUG_PAGEALLOC
>>  	if (memblock.reserved.regions == memblock_reserved_init_regions)
>>  		return 0;
>>
>>  	return memblock_free(__pa(memblock.reserved.regions),
>>  		 sizeof(struct memblock_region) * memblock.reserved.max);
>> +#else
>> +	return 0;
>> +#endif
>
> BTW, this is just ugly and I don't think we're saving any noticeable
> amount by doing this "free - give it to page allocator - reserve
> again" dancing.  We should just allocate regions aligned to page
> boundaries and free them later when memblock is no longer in use.

If that is the case, that change could fix another problem too: during that one free, the reserved.regions array could get doubled. Please check the attached patch.

Yinghai
Attachment:
fix_free_memblock_reserve_v4.patch
Description: Binary data