On Mon 21-01-19 11:38:49, Qian Cai wrote:
> 
> 
> On 1/21/19 4:53 AM, Michal Hocko wrote:
> > On Thu 17-01-19 21:16:50, Qian Cai wrote:
[...]
> >> Fixes: 2d070eab2e82 ("mm: consider zone which is not fully populated to
> >> have holes")
> > 
> > Did you mean
> > Fixes: 9f1eb38e0e11 ("mm, kmemleak: little optimization while scanning")
> 
> No, pfn_to_online_page() missed a few checks compared to pfn_valid(), at least on
> arm64, where the returned pfn is no longer valid (whereas pfn_valid() will skip those).
> 
> 2d070eab2e82 introduced pfn_to_online_page(), so it was targeted to fix it.

But it is 9f1eb38e0e11 which replaced pfn_valid by pfn_to_online_page.

> > 
> >> Signed-off-by: Qian Cai <cai@xxxxxx>
> >> ---
> >>  include/linux/memory_hotplug.h | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >> 
> >> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> >> index 07da5c6c5ba0..b8b36e6ac43b 100644
> >> --- a/include/linux/memory_hotplug.h
> >> +++ b/include/linux/memory_hotplug.h
> >> @@ -26,7 +26,7 @@ struct vmem_altmap;
> >>  	struct page *___page = NULL;			\
> >>  	unsigned long ___nr = pfn_to_section_nr(pfn);	\
> >>  							\
> >> -	if (___nr < NR_MEM_SECTIONS && online_section_nr(___nr))\
> >> +	if (online_section_nr(___nr) && pfn_valid(pfn))	\
> >>  		___page = pfn_to_page(pfn);		\
> > 
> > Why have you removed the bound check? Is this safe?
> > Regarding the fix, I am not really sure TBH. If the section is online
> > then we assume all struct pages to be initialized. If anything this
> > should be limited to weird arches which might have holes, so
> > pfn_valid_within().
> 
> It looks to me that, at least on arm64 and x86_64, this check is already done in
> pfn_valid():
> 
> 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
> 		return 0;

But an overflow could happen before pfn_valid is evaluated, no?
-- 
Michal Hocko
SUSE Labs
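
[Editor's note: to make the evaluation-order concern above concrete, here is a
minimal userspace sketch, not kernel code. The names section_is_online(),
valid(), NR_SECTIONS and BAD_NR are illustrative stand-ins for
online_section_nr(), the bound check inside pfn_valid(), NR_MEM_SECTIONS and
an out-of-range section number; they are not the real kernel symbols. With
"a && b", "a" is always evaluated first, so a bound check placed inside
pfn_valid() cannot protect the section-array lookup that the proposed macro
does on the left-hand side.]

/*
 * Sketch of why dropping "___nr < NR_MEM_SECTIONS" before the section
 * lookup is the part being questioned: the lookup runs before the
 * bound check ever gets a chance to reject the pfn.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_SECTIONS	8
#define BAD_NR		(NR_SECTIONS + 100)

static bool online[NR_SECTIONS];

/* Stand-in for online_section_nr(): no bound check of its own. */
static bool section_is_online(unsigned long nr)
{
	printf("section lookup for nr=%lu runs first\n", nr);
	if (nr >= NR_SECTIONS) {
		printf("  -> would index the section array out of bounds\n");
		return false;	/* the real helper has no such bail-out */
	}
	return online[nr];
}

/* Stand-in for the NR_MEM_SECTIONS bound check inside pfn_valid(). */
static bool valid(unsigned long nr)
{
	return nr < NR_SECTIONS;
}

int main(void)
{
	/*
	 * Mirrors "if (online_section_nr(___nr) && pfn_valid(pfn))": the
	 * left operand is evaluated before valid() can short-circuit, so
	 * the out-of-bounds lookup has already happened.
	 */
	if (section_is_online(BAD_NR) && valid(BAD_NR))
		printf("page would be returned\n");

	/* The original ordering checks the bound before the lookup. */
	if (BAD_NR < NR_SECTIONS && section_is_online(BAD_NR))
		printf("page would be returned\n");

	return 0;
}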