On Mon, 26 Mar 2018, Wei Yang wrote:

> In commit c4e1be9ec113 ("mm, sparsemem: break out of loops early"),
> __highest_present_section_nr was introduced to reduce the loop count
> for present sections. This is also helpful for usemap and memmap
> allocation.
>
> This patch uses __highest_present_section_nr + 1 to optimize the loop.
>
> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
> ---
>  mm/sparse.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 7af5e7a92528..505050346249 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -561,7 +561,7 @@ static void __init alloc_usemap_and_memmap(void (*alloc_func)
>  		map_count = 1;
>  	}
>  	/* ok, last chunk */
> -	alloc_func(data, pnum_begin, NR_MEM_SECTIONS,
> +	alloc_func(data, pnum_begin, __highest_present_section_nr+1,
>  		   map_count, nodeid_begin);
>  }

What happens if s/NR_MEM_SECTIONS/pnum/?