On Wed, 2019-04-17 at 11:39 -0700, Dan Williams wrote:
> Prepare for hot{plug,remove} of sub-ranges of a section by tracking a
> section active bitmask, each bit representing 2MB (SECTION_SIZE (128M) /
> map_active bitmask length (64)). If it turns out that 2MB is too large
> of an active tracking granularity it is trivial to increase the size of
> the map_active bitmap.
>
> The implications of a partially populated section is that pfn_valid()
> needs to go beyond a valid_section() check and read the sub-section
> active ranges from the bitmask.
>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Logan Gunthorpe <logang@xxxxxxxxxxxx>
> Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>

Hi Dan,

I am still going through the patchset but:

> +static unsigned long section_active_mask(unsigned long pfn,
> +		unsigned long nr_pages)
> +{
> +	int idx_start, idx_size;
> +	phys_addr_t start, size;
> +
> +	if (!nr_pages)
> +		return 0;
> +
> +	start = PFN_PHYS(pfn);
> +	size = PFN_PHYS(min(nr_pages, PAGES_PER_SECTION
> +				- (pfn & ~PAGE_SECTION_MASK)));

We already picked the lowest value in section_active_init, didn't we?
This min() operation seems redundant to me here.

> +	size = ALIGN(size, SECTION_ACTIVE_SIZE);
> +
> +	idx_start = section_active_index(start);
> +	idx_size = section_active_index(size);
> +
> +	if (idx_size == 0)
> +		return -1;
> +	return ((1UL << idx_size) - 1) << idx_start;
> +}
> +
> +void section_active_init(unsigned long pfn, unsigned long nr_pages)
> +{
> +	int end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
> +	int i, start_sec = pfn_to_section_nr(pfn);
> +
> +	if (!nr_pages)
> +		return;
> +
> +	for (i = start_sec; i <= end_sec; i++) {
> +		struct mem_section *ms;
> +		unsigned long mask;
> +		unsigned long pfns;

s/pfns/nr_pfns/ instead?

> +		pfns = min(nr_pages, PAGES_PER_SECTION
> +				- (pfn & ~PAGE_SECTION_MASK));
> +		mask = section_active_mask(pfn, pfns);
> +
> +		ms = __nr_to_section(i);
> +		pr_debug("%s: sec: %d mask: %#018lx\n", __func__, i, mask);
> +		ms->usage->map_active = mask;
> +
> +		pfn += pfns;
> +		nr_pages -= pfns;
> +	}
> +}

Although the code is not very complicated, it could use some comments
here and there, I think.

> +
>  /* Record a memory area against a node. */
>  void __init memory_present(int nid, unsigned long start, unsigned long end)
>  {

-- 
Oscar Salvador
SUSE L3
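
For readers not following the full patchset: SECTION_ACTIVE_SIZE and
section_active_index() are defined elsewhere in the series and are not part
of the quoted hunk. The standalone program below is only a sketch of the
mask arithmetic under discussion; the geometry (SECTION_SIZE_BITS = 27 for
128M sections, PAGE_SHIFT = 12) and the helper definitions are assumptions
reconstructed from the commit message (one bit per SECTION_SIZE / 64 = 2M),
not code taken from the patch. It reproduces the per-2M bit mapping, the
idx_size == 0 "whole section" case that returns a full mask, and the min()
clamp that repeats the one already done in section_active_init().

#include <stdio.h>

/* Assumed x86_64-like geometry: 4K pages, 128M sections. */
#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	27
#define PAGES_PER_SECTION	(1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))
#define PAGE_SECTION_MASK	(~(PAGES_PER_SECTION - 1))
#define PA_SECTION_MASK		(~((1UL << SECTION_SIZE_BITS) - 1))

/* One map_active bit per 2M: a 128M section divided across 64 bits. */
#define SECTION_ACTIVE_SIZE	((1UL << SECTION_SIZE_BITS) / 64)

#define PFN_PHYS(pfn)		((unsigned long)(pfn) << PAGE_SHIFT)
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define min(a, b)		((a) < (b) ? (a) : (b))

/* Assumed helper shape: offset within the section, in 2M units. */
static int section_active_index(unsigned long phys)
{
	return (phys & ~PA_SECTION_MASK) / SECTION_ACTIVE_SIZE;
}

static unsigned long section_active_mask(unsigned long pfn,
		unsigned long nr_pages)
{
	int idx_start, idx_size;
	unsigned long start, size;

	if (!nr_pages)
		return 0;

	start = PFN_PHYS(pfn);
	/* Clamp to the section end, as section_active_init() already did. */
	size = PFN_PHYS(min(nr_pages, PAGES_PER_SECTION
				- (pfn & ~PAGE_SECTION_MASK)));
	size = ALIGN(size, SECTION_ACTIVE_SIZE);

	idx_start = section_active_index(start);
	idx_size = section_active_index(size);

	/* A full 128M range wraps to index 0: mark every sub-section. */
	if (idx_size == 0)
		return -1;
	return ((1UL << idx_size) - 1) << idx_start;
}

int main(void)
{
	/* 2M (512 pages) starting 4M into section 0: only bit 2 is set. */
	printf("%#018lx\n", section_active_mask(1024, 512));

	/* A whole section: idx_size == 0, so all 64 bits end up set. */
	printf("%#018lx\n", section_active_mask(0, PAGES_PER_SECTION));
	return 0;
}

Expected output on a 64-bit build: 0x0000000000000004 followed by
0xffffffffffffffff.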