The patch titled
     Subject: mm, memory_hotplug: is_mem_section_removable do not pass the end of a zone
has been added to the -mm tree.  Its filename is
     mm-memory_hotplug-is_mem_section_removable-do-not-pass-the-end-of-a-zone.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-memory_hotplug-is_mem_section_removable-do-not-pass-the-end-of-a-zone.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-memory_hotplug-is_mem_section_removable-do-not-pass-the-end-of-a-zone.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Michal Hocko <mhocko@xxxxxxxx>
Subject: mm, memory_hotplug: is_mem_section_removable do not pass the end of a zone

Patch series "mm, memory_hotplug: fix uninitialized pages fallouts".

Mikhail Zaslonko posted fixes for the two bugs quite some time ago [1].
I pushed back on those fixes because I believed it is much better to plug
the problem at initialization time rather than play whack-a-mole all over
the hotplug code and find all the places which expect the full memory
section to be initialized.  We ended up with 2830bf6f05fb ("mm,
memory_hotplug: initialize struct pages for the full memory section")
merged, and it caused a regression [2][3].  The reason is that there are
memory layouts where two NUMA nodes share the same memory section, so the
merged fix is simply incorrect.

In order to plug this hole we really have to be zone range aware in those
handlers.  I have split up the original patch into two.  One is unchanged
(patch 2) and I took a different approach for the `removable' crash.  It
would be great if Mikhail could test that it still works for his memory
layout.

[1] http://lkml.kernel.org/r/20181105150401.97287-2-zaslonko@xxxxxxxxxxxxx
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1666948
[3] http://lkml.kernel.org/r/20190125163938.GA20411@xxxxxxxxxxxxxx

This patch (of 2):

Mikhail has reported the following VM_BUG_ON triggered when reading the
sysfs removable state of a memory block:

page:000003d082008000 is uninitialized and poisoned
page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
Call Trace:
([<0000000000385b26>] test_pages_in_a_zone+0xde/0x160)
 [<00000000008f15c4>] show_valid_zones+0x5c/0x190
 [<00000000008cf9c4>] dev_attr_show+0x34/0x70
 [<0000000000463ad0>] sysfs_kf_seq_show+0xc8/0x148
 [<00000000003e4194>] seq_read+0x204/0x480
 [<00000000003b53ea>] __vfs_read+0x32/0x178
 [<00000000003b55b2>] vfs_read+0x82/0x138
 [<00000000003b5be2>] ksys_read+0x5a/0xb0
 [<0000000000b86ba0>] system_call+0xdc/0x2d8
Last Breaking-Event-Address:
 [<0000000000385b26>] test_pages_in_a_zone+0xde/0x160
Kernel panic - not syncing: Fatal exception: panic_on_oops

The reason is that the memory block spans the zone boundary and we
stumble over an uninitialized struct page.  Fix this by enforcing the
zone range in is_mem_section_removable() so that we never run past the
end of a zone.
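As an illustration only (not part of the patch): the fix boils down to
clamping the pfn range that is walked to the end of the zone the first
page belongs to, so the pageblock walk never reaches struct pages of a
different, possibly uninitialized, section.  A minimal userspace sketch
of that clamping, with made-up pfn values and a stub standing in for
zone_end_pfn(page_zone(page)):

#include <stdio.h>

/* Stub standing in for zone_end_pfn(page_zone(page)); the value is made up. */
static unsigned long zone_end_pfn_stub(void)
{
	return 0x80000;		/* first pfn past the end of the zone */
}

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned long start_pfn = 0x7fc00;	/* memory block straddles the zone end */
	unsigned long nr_pages = 0x1000;

	/* Unclamped end: the walk would run past the zone boundary. */
	unsigned long naive_end_pfn = start_pfn + nr_pages;

	/* Clamped end: never step beyond the zone. */
	unsigned long end_pfn = min_ul(start_pfn + nr_pages, zone_end_pfn_stub());

	printf("unclamped end pfn: %#lx\n", naive_end_pfn);
	printf("clamped end pfn:   %#lx\n", end_pfn);
	return 0;
}

With these example numbers the unclamped end pfn (0x80c00) lies past the
zone end (0x80000), while the clamped end pfn stops exactly at the zone
boundary, which is what the hunk below achieves with min() and
zone_end_pfn().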
Link: http://lkml.kernel.org/r/20190128144506.15603-2-mhocko@xxxxxxxxxx
Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
Reported-by: Mikhail Zaslonko <zaslonko@xxxxxxxxxxxxx>
Debugged-by: Mikhail Zaslonko <zaslonko@xxxxxxxxxxxxx>
Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Cc: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
Cc: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
Cc: Mikhail Gavrilov <mikhail.v.gavrilov@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory_hotplug.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/memory_hotplug.c~mm-memory_hotplug-is_mem_section_removable-do-not-pass-the-end-of-a-zone
+++ a/mm/memory_hotplug.c
@@ -1246,7 +1246,8 @@ static bool is_pageblock_removable_noloc
 bool is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages)
 {
 	struct page *page = pfn_to_page(start_pfn);
-	struct page *end_page = page + nr_pages;
+	unsigned long end_pfn = min(start_pfn + nr_pages, zone_end_pfn(page_zone(page)));
+	struct page *end_page = pfn_to_page(end_pfn);

 	/* Check the starting page of each pageblock within the range */
 	for (; page < end_page; page = next_active_pageblock(page)) {
_

Patches currently in -mm which might be from mhocko@xxxxxxxx are

mm-memory_hotplug-is_mem_section_removable-do-not-pass-the-end-of-a-zone.patch
mm-oom-marks-all-killed-tasks-as-oom-victims.patch
memcg-do-not-report-racy-no-eligible-oom-tasks.patch