On 27 Jan 2021, at 5:18, David Hildenbrand wrote:

> Right now, if activation fails, we might already have exposed some pages to
> the buddy for CMA use (although they will never get actually used by CMA),
> and some pages won't be exposed to the buddy at all.
>
> Let's check for "single zone" early and on error, don't expose any pages
> for CMA use - instead, expose them to the buddy available for any use.
> Simply call free_reserved_page() on every single page - easier than
> going via free_reserved_area(), converting back and forth between pfns
> and virt addresses.
>
> In addition, make sure to fixup totalcma_pages properly.
>
> Example: 6 GiB QEMU VM with "... hugetlb_cma=2G movablecore=20% ...":
>   [    0.006891] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
>   [    0.006893] cma: Reserved 2048 MiB at 0x0000000100000000
>   [    0.006893] hugetlb_cma: reserved 2048 MiB on node 0
>   ...
>   [    0.175433] cma: CMA area hugetlb0 could not be activated
>
> Before this patch:
>   # cat /proc/meminfo
>   MemTotal:        5867348 kB
>   MemFree:         5692808 kB
>   MemAvailable:    5542516 kB
>   ...
>   CmaTotal:        2097152 kB
>   CmaFree:         1884160 kB
>
> After this patch:
>   # cat /proc/meminfo
>   MemTotal:        6077308 kB
>   MemFree:         5904208 kB
>   MemAvailable:    5747968 kB
>   ...
>   CmaTotal:              0 kB
>   CmaFree:               0 kB
>
> Note: cma_init_reserved_mem() makes sure that we always cover full
> pageblocks / MAX_ORDER - 1 pages.
>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: "Peter Zijlstra (Intel)" <peterz@xxxxxxxxxxxxx>
> Cc: Mike Rapoport <rppt@xxxxxxxxxx>
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> Cc: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  mm/cma.c | 43 +++++++++++++++++++++----------------------
>  1 file changed, 21 insertions(+), 22 deletions(-)

LGTM.

Reviewed-by: Zi Yan <ziy@xxxxxxxxxx>

--
Best Regards,
Yan Zi