On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>On 22.06.20 10:26, Wei Yang wrote:
>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>> Especially with memory hotplug, we can have offline sections (with a
>>> garbage memmap) and overlapping zones. We have to make sure to only
>>> touch initialized memmaps (online sections managed by the buddy) and that
>>> the zone matches, to not move pages between zones.
>>>
>>> To test if this can actually happen, I added a simple
>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>> onlining the first memory block "online_movable" and the second memory
>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>> and MOVABLE) overlap.
>>>
>>> This might result in all kinds of weird situations (e.g., double
>>> allocations, list corruptions, unmovable allocations ending up in the
>>> movable zone).
>>>
>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
>>> Cc: stable@xxxxxxxxxxxxxxx # v5.2+
>>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
>>> Cc: Michal Hocko <mhocko@xxxxxxxx>
>>> Cc: Minchan Kim <minchan@xxxxxxxxxx>
>>> Cc: Huang Ying <ying.huang@xxxxxxxxx>
>>> Cc: Wei Yang <richard.weiyang@xxxxxxxxx>
>>> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>>> ---
>>>  mm/shuffle.c | 18 +++++++++---------
>>>  1 file changed, 9 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>> index 44406d9977c77..dd13ab851b3ee 100644
>>> --- a/mm/shuffle.c
>>> +++ b/mm/shuffle.c
>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>   * For two pages to be swapped in the shuffle, they must be free (on a
>>>   * 'free_area' lru), have the same order, and have the same migratetype.
>>>   */
>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>> +                                                  unsigned long pfn, int order)
>>>  {
>>> -        struct page *page;
>>> +        struct page *page = pfn_to_online_page(pfn);
>>
>> Hi, David and Dan,
>>
>> One thing I want to confirm here is that we won't have a partially online
>> section, right? We can add a sub-section to the system, but we won't manage
>> it via the buddy.
>
>Hi,
>
>there is still a BUG with sub-section hot-add (devmem), which broke
>pfn_to_online_page() in corner cases (especially, see the description in
>include/linux/mmzone.h). We can have a boot-memory section partially
>populated and marked online. Then, we can hot-add devmem, marking the
>remaining pfns valid - and as the section is marked online, also as online.

Oh, yes, I see that description. This means we could have a section marked
as online even though one of its sub-sections was never added. The good news
is that for an early section the memmap is populated even for a sub-section
that was not added, so the page returned from pfn_to_online_page() is still a
valid one.

But what would happen if such a sub-section is removed after having been
added? Would section_deactivate() release the memmap backing those
"struct page"s?

>
>This is, however, a different problem to solve and affects most other
>pfn walkers as well. The "if (page_zone(page) != zone)" check guards us
>from most harm, as the devmem zone won't match.
>
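Just to connect that guard back to the shuffle path: as far as I can tell,
after this patch the whole validity check ends up looking roughly like the
sketch below. This is my own reconstruction from the quoted hunk, so the
ordering and wording of the remaining checks are my assumption, not a quote
of the upstream code:

static struct page * __meminit shuffle_valid_page(struct zone *zone,
                                                  unsigned long pfn, int order)
{
        struct page *page = pfn_to_online_page(pfn);

        /* ...is the pfn backed by an initialized, online memmap? */
        if (!page)
                return NULL;

        /* ...does the page belong to the zone we are shuffling? */
        if (page_zone(page) != zone)
                return NULL;

        /* ...is the page free and currently on a free_area list? */
        if (!PageBuddy(page))
                return NULL;

        /* ...is it on the free_area list of the requested order? */
        if (page_order(page) != order)
                return NULL;

        return page;
}

If that is accurate, the page_zone() comparison is exactly what catches the
overlapping NORMAL/MOVABLE case from the changelog, while pfn_to_online_page()
covers the offline-section (garbage memmap) case.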
Anyway, yes, the sub-section question is a different problem that just came
to my mind; I hope it won't affect this patch.

>Thanks!
>
>--
>Thanks,
>
>David / dhildenb

--
Wei Yang
Help you, Help me