On Thu, Sep 22, 2016 at 05:59:46PM +0200, Vlastimil Babka wrote:
> On 09/22/2016 08:50 AM, Joonsoo Kim wrote:
> >On Thu, Sep 22, 2016 at 02:45:46PM +0900, Joonsoo Kim wrote:
> >>>
> >>> > /* Free whole pageblock and set its migration type to MIGRATE_CMA. */
> >>> > void __init init_cma_reserved_pageblock(struct page *page)
> >>> > {
> >>> > 	unsigned i = pageblock_nr_pages;
> >>> >+	unsigned long pfn = page_to_pfn(page);
> >>> > 	struct page *p = page;
> >>> >+	int nid = page_to_nid(page);
> >>> >+
> >>> >+	/*
> >>> >+	 * ZONE_CMA will steal present pages from other zones by changing
> >>> >+	 * page links so page_zone() is changed. Before that,
> >>> >+	 * we need to adjust previous zone's page count first.
> >>> >+	 */
> >>> >+	adjust_present_page_count(page, -pageblock_nr_pages);
> >>> >
> >>> > 	do {
> >>> > 		__ClearPageReserved(p);
> >>> > 		set_page_count(p, 0);
> >>> >-	} while (++p, --i);
> >>> >+
> >>> >+		/* Steal pages from other zones */
> >>> >+		set_page_links(p, ZONE_CMA, nid, pfn);
> >>> >+	} while (++p, ++pfn, --i);
> >>> >+
> >>> >+	adjust_present_page_count(page, pageblock_nr_pages);
> >>>
> >>> This seems to assign pages to ZONE_CMA on the proper node, which is
> >>> good. But then ZONE_CMA on multiple nodes will have unnecessary
> >>> holes in the spanned pages, as each will contain only a subset.
> >>
> >>True, I will fix it and respin the series.
> >
> >I now realize that it's too late to send the full series for the next
> >merge window. I will send the full series after the next merge window
> >closes.
> 
> I think there might still be an rc8, thus another week.

Indeed. I will send the full series soon.

> 
> >Anyway, I'd like to confirm that the following incremental patch will
> >solve your concern.
> 
> Yeah, that should work, as long as single cma areas don't include
> multiple nodes?

Single cma areas cannot include multiple nodes, at least for now. There is
a check that a single cma area sits within a single zone.

Thanks.
> 
> >Thanks.
> >
> >
> >------>8--------------
> > mm/cma.c | 25 ++++++++++++++++---------
> > 1 file changed, 16 insertions(+), 9 deletions(-)
> >
> >diff --git a/mm/cma.c b/mm/cma.c
> >index d69bdf7..8375554 100644
> >--- a/mm/cma.c
> >+++ b/mm/cma.c
> >@@ -146,22 +146,29 @@ static int __init cma_init_reserved_areas(void)
> > {
> > 	int i;
> > 	struct zone *zone;
> >-	unsigned long start_pfn = UINT_MAX, end_pfn = 0;
> >+	pg_data_t *pgdat;
> >
> > 	if (!cma_area_count)
> > 		return 0;
> >
> >-	for (i = 0; i < cma_area_count; i++) {
> >-		if (start_pfn > cma_areas[i].base_pfn)
> >-			start_pfn = cma_areas[i].base_pfn;
> >-		if (end_pfn < cma_areas[i].base_pfn + cma_areas[i].count)
> >-			end_pfn = cma_areas[i].base_pfn + cma_areas[i].count;
> >-	}
> >+	for_each_online_pgdat(pgdat) {
> >+		unsigned long start_pfn = UINT_MAX, end_pfn = 0;
> >
> >-	for_each_zone(zone) {
> >-		if (!is_zone_cma(zone))
> >+		for (i = 0; i < cma_area_count; i++) {
> >+			if (page_to_nid(pfn_to_page(cma_areas[i].base_pfn)) !=
> 
> We have pfn_to_nid() (although the implementation is just like this).

Will fix.

Thanks.
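For anyone following along, the per-node span computation the incremental patch introduces can be illustrated with a minimal userspace sketch. This is not kernel code: `struct cma_area` and `node_of_pfn()` below are hypothetical stand-ins for the kernel's `cma_areas[]` and `pfn_to_nid()`, and the toy node mapping is invented purely for the example.

```c
#include <assert.h>
#include <limits.h>

/* Stand-in for the kernel's struct cma / cma_areas[] entry. */
struct cma_area {
	unsigned long base_pfn;
	unsigned long count;
};

/* Toy stand-in for pfn_to_nid(): pfns below 0x1000 live on node 0. */
static int node_of_pfn(unsigned long pfn)
{
	return pfn < 0x1000 ? 0 : 1;
}

/*
 * Compute the [start_pfn, end_pfn) span covering only the CMA areas on
 * node @nid, mirroring the per-pgdat loop in the patch. Computing one
 * span per node (rather than one global span) is what avoids the
 * cross-node holes in ZONE_CMA's spanned pages that Vlastimil pointed
 * out. Returns 0 if the node has no CMA area.
 */
static int cma_span_for_node(const struct cma_area *areas, int nr, int nid,
			     unsigned long *start_pfn, unsigned long *end_pfn)
{
	int i, found = 0;

	*start_pfn = ULONG_MAX;
	*end_pfn = 0;

	for (i = 0; i < nr; i++) {
		if (node_of_pfn(areas[i].base_pfn) != nid)
			continue;
		if (*start_pfn > areas[i].base_pfn)
			*start_pfn = areas[i].base_pfn;
		if (*end_pfn < areas[i].base_pfn + areas[i].count)
			*end_pfn = areas[i].base_pfn + areas[i].count;
		found = 1;
	}
	return found;
}
```

With two areas on node 0 (0x100+0x80 and 0x400+0x40) and one on node 1 (0x2000+0x100), node 0's span is [0x100, 0x440) and node 1's is [0x2000, 0x2100), instead of a single [0x100, 0x2100) span containing a large hole.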