On Mon, Jun 17, 2019 at 6:42 PM Wei Yang <richardw.yang@xxxxxxxxxxxxxxx> wrote:
>
> On Wed, Jun 05, 2019 at 02:58:04PM -0700, Dan Williams wrote:
> >Sub-section hotplug support reduces the unit of operation of hotplug
> >from section-sized units (PAGES_PER_SECTION) to sub-section-sized units
> >(PAGES_PER_SUBSECTION). Teach shrink_{zone,pgdat}_span() to consider
> >PAGES_PER_SUBSECTION boundaries as the points where pfn_valid(), not
> >valid_section(), can toggle.
> >
> >Cc: Michal Hocko <mhocko@xxxxxxxx>
> >Cc: Vlastimil Babka <vbabka@xxxxxxx>
> >Cc: Logan Gunthorpe <logang@xxxxxxxxxxxx>
> >Reviewed-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> >Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
> >Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
> >---
> > mm/memory_hotplug.c | 29 ++++++++---------------------
> > 1 file changed, 8 insertions(+), 21 deletions(-)
> >
> >diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> >index 7b963c2d3a0d..647859a1d119 100644
> >--- a/mm/memory_hotplug.c
> >+++ b/mm/memory_hotplug.c
> >@@ -318,12 +318,8 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
> > 				     unsigned long start_pfn,
> > 				     unsigned long end_pfn)
> > {
> >-	struct mem_section *ms;
> >-
> >-	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION) {
> >-		ms = __pfn_to_section(start_pfn);
> >-
> >-		if (unlikely(!valid_section(ms)))
> >+	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
> >+		if (unlikely(!pfn_valid(start_pfn)))
> > 			continue;
>
> Hmm, this changes the granularity of the validity check from SECTION to
> SUBSECTION, but it does not change the granularity of the node id and zone
> information.
>
> For example, if we find that the node id of a pfn mismatches, we could skip
> the whole section instead of just a subsection.
>
> Maybe this is not a big deal.

I don't see a problem.
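
For reference, the kind of change Wei seems to be suggesting would look
roughly like the below. This is only an untested sketch: the nid/zone
checks already exist in find_smallest_section_pfn(), the only new part is
jumping to the next section boundary on a mismatch instead of stepping
one sub-section at a time.

static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
				     unsigned long start_pfn,
				     unsigned long end_pfn)
{
	while (start_pfn < end_pfn) {
		/* presence can toggle at sub-section granularity */
		if (unlikely(!pfn_valid(start_pfn))) {
			start_pfn += PAGES_PER_SUBSECTION;
			continue;
		}

		/*
		 * Assume nid/zone are uniform within a section and
		 * skip to the next section boundary on a mismatch.
		 */
		if (unlikely(pfn_to_nid(start_pfn) != nid) ||
		    zone != page_zone(pfn_to_page(start_pfn))) {
			start_pfn = ALIGN(start_pfn + 1, PAGES_PER_SECTION);
			continue;
		}

		return start_pfn;
	}

	return 0;
}

That said, one goal of sub-section hotplug is to let sub-sections of the
same section land in different zones (e.g. ZONE_DEVICE next to System
RAM), so a section-sized skip on a zone mismatch could step past pfns
that do belong to the target zone.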