On 25.10.20 11:15, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
>
> When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the
> kernel direct mapping after free_pages(). The pages than need to be
> mapped back before they could be used. Theese mapping operations use
> __kernel_map_pages() guarded with with debug_pagealloc_enabled().
>
> The only place that calls __kernel_map_pages() without checking
> whether DEBUG_PAGEALLOC is enabled is the hibernation code that
> presumes availability of this function when ARCH_HAS_SET_DIRECT_MAP
> is set. Still, on arm64, __kernel_map_pages() will bail out when
> DEBUG_PAGEALLOC is not enabled but set_direct_map_invalid_noflush()
> may render some pages not present in the direct map and hibernation
> code won't be able to save such pages.
>
> To make page allocation debugging and hibernation interaction more
> robust, the dependency on DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP
> has to be made more explicit.
>
> Start with combining the guard condition and the call to
> __kernel_map_pages() into a single debug_pagealloc_map_pages()
> function to emphasize that __kernel_map_pages() should not be called
> without DEBUG_PAGEALLOC and use this new function to map/unmap pages
> when page allocation debug is enabled.
>
> As the only remaining user of kernel_map_pages() is the hibernation
> code, mode that function into kernel/power/snapshot.c closer to a
> caller.

s/mode/move/

>
> Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> ---
>  include/linux/mm.h      | 16 +++++++---------
>  kernel/power/snapshot.c | 11 +++++++++++
>  mm/memory_hotplug.c     |  3 +--
>  mm/page_alloc.c         |  6 ++----
>  mm/slab.c               |  8 +++-----
>  5 files changed, 24 insertions(+), 20 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ef360fe70aaf..14e397f3752c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2927,21 +2927,19 @@ static inline bool debug_pagealloc_enabled_static(void)
>  #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
>  extern void __kernel_map_pages(struct page *page, int numpages, int enable);
>
> -/*
> - * When called in DEBUG_PAGEALLOC context, the call should most likely be
> - * guarded by debug_pagealloc_enabled() or debug_pagealloc_enabled_static()
> - */
> -static inline void
> -kernel_map_pages(struct page *page, int numpages, int enable)
> +static inline void debug_pagealloc_map_pages(struct page *page,
> +					     int numpages, int enable)
>  {
> -	__kernel_map_pages(page, numpages, enable);
> +	if (debug_pagealloc_enabled_static())
> +		__kernel_map_pages(page, numpages, enable);
>  }
> +
>  #ifdef CONFIG_HIBERNATION
>  extern bool kernel_page_present(struct page *page);
>  #endif	/* CONFIG_HIBERNATION */
>  #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
> -static inline void
> -kernel_map_pages(struct page *page, int numpages, int enable) {}
> +static inline void debug_pagealloc_map_pages(struct page *page,
> +					     int numpages, int enable) {}
>  #ifdef CONFIG_HIBERNATION
>  static inline bool kernel_page_present(struct page *page) { return true; }
>  #endif	/* CONFIG_HIBERNATION */
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 46b1804c1ddf..fa499466f645 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -76,6 +76,17 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
>  static inline void hibernate_restore_unprotect_page(void *page_address) {}
>  #endif /* CONFIG_STRICT_KERNEL_RWX && CONFIG_ARCH_HAS_SET_MEMORY */
>
> +#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> +static inline void
> +kernel_map_pages(struct page *page, int numpages, int enable)
> +{
> +	__kernel_map_pages(page, numpages, enable);
> +}
> +#else
> +static inline void
> +kernel_map_pages(struct page *page, int numpages, int enable) {}
> +#endif
> +

That change should go into a separate patch.
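FWIW, spelling out the call-site effect for readers: the mm/page_alloc.c and
mm/slab.c hunks are not quoted above, so this is only a sketch of the expected
pattern, but the open-coded guard

	/* before: every caller checks the debug switch itself */
	if (debug_pagealloc_enabled_static())
		kernel_map_pages(page, 1 << order, 0);

should collapse into

	/* after: the DEBUG_PAGEALLOC check is folded into the helper */
	debug_pagealloc_map_pages(page, 1 << order, 0);

so callers no longer have to know about the guard at all.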
For the debug_pagealloc_map_pages() parts

Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>

--
Thanks,

David / dhildenb