On 22.06.20 15:10, Wei Yang wrote:
> On Mon, Jun 22, 2020 at 11:51:34AM +0200, David Hildenbrand wrote:
>> On 22.06.20 11:22, Wei Yang wrote:
>>> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>>>> On 22.06.20 10:26, Wei Yang wrote:
>>>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>>>> Especially with memory hotplug, we can have offline sections (with a
>>>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>>>> touch initialized memmaps (online sections managed by the buddy) and
>>>>>> that the zone matches, to not move pages between zones.
>>>>>>
>>>>>> To test if this can actually happen, I added a simple
>>>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM
>>>>>> and onlining the first memory block "online_movable" and the second
>>>>>> memory block "online_kernel", it will trigger the BUG, as both zones
>>>>>> (NORMAL and MOVABLE) overlap.
>>>>>>
>>>>>> This might result in all kinds of weird situations (e.g., double
>>>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>>>> movable zone).
>>>>>>
>>>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>>>> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
>>>>>> Cc: stable@xxxxxxxxxxxxxxx # v5.2+
>>>>>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>>>>> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
>>>>>> Cc: Michal Hocko <mhocko@xxxxxxxx>
>>>>>> Cc: Minchan Kim <minchan@xxxxxxxxxx>
>>>>>> Cc: Huang Ying <ying.huang@xxxxxxxxx>
>>>>>> Cc: Wei Yang <richard.weiyang@xxxxxxxxx>
>>>>>> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
>>>>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>>>>>> ---
>>>>>>  mm/shuffle.c | 18 +++++++++---------
>>>>>>  1 file changed, 9 insertions(+), 9 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>>>> --- a/mm/shuffle.c
>>>>>> +++ b/mm/shuffle.c
>>>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>>>   * For two pages to be swapped in the shuffle, they must be free (on a
>>>>>>   * 'free_area' lru), have the same order, and have the same migratetype.
>>>>>>   */
>>>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>>>> +						  unsigned long pfn, int order)
>>>>>>  {
>>>>>> -	struct page *page;
>>>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>>>
>>>>> Hi, David and Dan,
>>>>>
>>>>> One thing I want to confirm here is that we won't have a partially
>>>>> online section, right? We can add a sub-section to the system, but we
>>>>> won't manage it via the buddy.
>>>>
>>>> Hi,
>>>>
>>>> there is still a BUG with sub-section hot-add (devmem), which broke
>>>> pfn_to_online_page() in corner cases (especially, see the description in
>>>> include/linux/mmzone.h). We can have a boot-memory section partially
>>>> populated and marked online. Then, we can hot-add devmem, marking the
>>>> remaining pfns valid - and as the section is marked online, also as
>>>> online.
>>>
>>> Oh, yes, I see this description.
>>>
>>> This means we could have a section marked as online, but with a
>>> sub-section not even added.
>>>
>>> The good news is that even if the sub-section is not added, its memmap
>>> is still populated for an early section. So the page returned from
>>> pfn_to_online_page() is a valid one.
>>>
>>> But what would happen if the sub-section is removed after it was added?
>>> Would section_deactivate() release the memmap related to these
>>> "struct page"s?
>>
>> If devmem is removed, the memmap will be freed and the sub-sections are
>> marked as non-present. So this works as expected.
>>
>
> Sorry, I may not have caught your point. If my understanding is correct,
> the above behavior happens in section_deactivate().
>
> Let me draw my understanding of section_deactivate():
>
>     section_deactivate(pfn, nr_pages)
>         clear_subsection_map(pfn, nr_pages)
>         depopulate_section_memmap(pfn, nr_pages)
>
> Since we just remove a sub-section, I skipped some unrelated code. These
> two functions would:
>
>   * clear the bitmap in ms->usage->subsection_map
>   * free the memmap for the sub-section
>
> But since the section is not empty, ms->section_mem_map is not set to NULL.

Let me clarify: sub-section hot-remove works differently when overlapping
with (online) boot memory within a section. Early sections (IOW, boot
memory) are never partially removed. See mm/sparse.c:section_deactivate().
We only free an early memmap when the section is completely empty. Also see
how include/linux/mmzone.h:pfn_valid() handles early sections.

So when we have a partially present section with boot memory, we
a) marked the whole section present and online (there is only a single bit)
b) allocated the memmap for the whole section
c) only exposed the relevant pages to the buddy

The memmap of the non-present parts of such a section was initialized and
those pages are reserved.

pfn_valid() will return true for all non-present pfns, because there is a
memmap. pfn_to_online_page() will return a page for all pfns in the
section, because we only have a single bit for the whole section. This has
been the case before sub-section hotplug and is still the case. It simply
looks like just another memory hole for which we have a memmap.

Now, with devmem it is possible to suddenly change these sub-section holes
(memmaps) to become ZONE_DEVICE memory. pfn_to_online_page() would have to
detect that and return NULL instead. Possible fixes were already discussed
(e.g., a sub-section online map instead of a single bit).

Again, the zone check saves us from the worst, just as in the case of all
other pfn walkers that use (as documented) pfn_to_online_page(). It still
needs a fix as discussed, but it seems to work reasonably fine like that
for now.
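
For reference, this is the shape of the check the patch adds. A rough
sketch only - the "free, same order" part via PageBuddy()/page_private()
is how I remember the rest of the helper, the exact upstream code may
differ slightly:

static struct page * __meminit shuffle_valid_page(struct zone *zone,
						  unsigned long pfn, int order)
{
	/* Only touch the memmap of pfns in online sections. */
	struct page *page = pfn_to_online_page(pfn);

	/* Offline section or memory hole: the memmap may contain garbage. */
	if (!page)
		return NULL;

	/* Zones can overlap within a section: never swap across zones. */
	if (page_zone(page) != zone)
		return NULL;

	/* ... is the page free and currently on a free_area list? */
	if (!PageBuddy(page))
		return NULL;

	/*
	 * ... is it on the free list of the expected order? The buddy
	 * order of a PageBuddy page is kept in page_private().
	 */
	if (page_private(page) != order)
		return NULL;

	return page;
}

Anything that fails these checks is simply skipped by the shuffle, so we
never dereference a garbage memmap and never move pages between zones.

--
Thanks,

David / dhildenb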