Hi,

Yeah, the __ClearPageReserved flag is cleared for each page, but memblock
still marks these physical addresses as reserved. If you
cat /sys/kernel/debug/memblock/reserved, you can still see these physical
addresses marked as reserved, which is not correct. This is because the
cma_activate_area() function releases the pages after the boot memory has
been freed, so we have to free the memblock regions ourselves. The same
problem also exists for the initrd reserved memory.

-----Original Message-----
From: Michal Hocko [mailto:mstsxfx@xxxxxxxxx] On Behalf Of Michal Hocko
Sent: Wednesday, September 10, 2014 4:18 PM
To: Wang, Yalin
Cc: 'linux-mm@xxxxxxxxx'; 'akpm@xxxxxxxxxxxxxxxxxxxx'; mm-commits@xxxxxxxxxxxxxxx; hughd@xxxxxxxxxx; b.zolnierkie@xxxxxxxxxxx
Subject: Re: [RFC] Free the reserved memblock when free cma pages

On Tue 09-09-14 14:13:58, Wang, Yalin wrote:
> This patch adds memblock_free to also free the reserved memblock, so
> that the cma pages are not marked as reserved memory in the
> /sys/kernel/debug/memblock/reserved debug file.

Why, and is this even correct? init_cma_reserved_pageblock seems to be
doing __ClearPageReserved on each page in the page block.

> Signed-off-by: Yalin Wang <yalin.wang@xxxxxxxxxxxxxx>
> ---
>  mm/cma.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index c17751c..f3ec756 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -114,6 +114,8 @@ static int __init cma_activate_area(struct cma *cma)
>  			goto err;
>  		}
>  		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> +		memblock_free(__pfn_to_phys(base_pfn),
> +			      pageblock_nr_pages * PAGE_SIZE);
>  	} while (--i);
>
>  	mutex_init(&cma->lock);
> --
> 2.1.0

-- 
Michal Hocko
SUSE Labs