Re: [PATCH] mm: Fix false softlockup during pfn range removal


 



On 20.06.20 01:12, Ben Widawsky wrote:
> When working with very large nodes, poisoning the struct pages (for
> which there will be very many) can take a very long time. If the system
> is using voluntary preemption, the soft lockup watchdog will not be able
> to detect forward progress. This patch addresses this issue by offering
> to give up time like __remove_pages() does.  This behavior was
> introduced in v5.6 with:
> commit d33695b16a9f ("mm/memory_hotplug: poison memmap in remove_pfn_range_from_zone()")
> 
> Alternatively, page_init_poison() could do this cond_resched(), but it
> seems to me that the caller of page_init_poison() is what actually
> knows whether or not it should relax its own priority.
> 
> Based on Dan's notes, I think this is perfectly safe:
> commit f931ab479dd2 ("mm: fix devm_memremap_pages crash, use mem_hotplug_{begin, done}")
> 
> Aside from fixing the lockup, it is also a friendlier thing to do on
> lower core systems that might wipe out large chunks of hotplug memory
> (probably not a very common case).

BTW, I think this is even a fix for !VMEMMAP. page_init_poison() will
just do a memset, which is only guaranteed to work correctly on a
per-section basis without SPARSE_VMEMMAP.

Thanks!

-- 
Thanks,

David / dhildenb




