On Mon, Mar 26, 2018 at 03:47:03PM -0700, David Rientjes wrote:
>On Tue, 27 Mar 2018, Wei Yang wrote:
>
>> >> In commit c4e1be9ec113 ("mm, sparsemem: break out of loops early"),
>> >> __highest_present_section_nr is introduced to reduce the loop count
>> >> over present sections. This is also helpful for usemap and memmap
>> >> allocation.
>> >>
>> >> This patch uses __highest_present_section_nr + 1 to optimize the loop.
>> >>
>> >> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
>> >> ---
>> >>  mm/sparse.c | 2 +-
>> >>  1 file changed, 1 insertion(+), 1 deletion(-)
>> >>
>> >> diff --git a/mm/sparse.c b/mm/sparse.c
>> >> index 7af5e7a92528..505050346249 100644
>> >> --- a/mm/sparse.c
>> >> +++ b/mm/sparse.c
>> >> @@ -561,7 +561,7 @@ static void __init alloc_usemap_and_memmap(void (*alloc_func)
>> >>  		map_count = 1;
>> >>  	}
>> >>  	/* ok, last chunk */
>> >> -	alloc_func(data, pnum_begin, NR_MEM_SECTIONS,
>> >> +	alloc_func(data, pnum_begin, __highest_present_section_nr+1,
>> >>  		   map_count, nodeid_begin);
>> >>  }
>> >>
>> >
>> >What happens if s/NR_MEM_SECTIONS/pnum/?
>>
>> I have tried this :-)
>>
>> The last pnum is -1 from next_present_section_nr().
>>
>
>Lol.  I think it would make more sense for the second patch to come before
>the first, but feel free to add

Thanks for your comment. Do I need to reorder the patches and send a v2?

>Acked-by: David Rientjes <rientjes@xxxxxxxxxx>

--
Wei Yang
Help you, Help me