Patch "mm: introduce debug_pagealloc_{map,unmap}_pages() helpers" has been added to the 5.10-stable tree

This is a note to let you know that I've just added the patch titled

    mm: introduce debug_pagealloc_{map,unmap}_pages() helpers

to the 5.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-introduce-debug_pagealloc_-map-unmap-_pages-helpe.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit a25514ea7731b0f9a21c67b2a8de9787b31dcd83
Author: Mike Rapoport <rppt@xxxxxxxxxx>
Date:   Mon Dec 14 19:10:20 2020 -0800

    mm: introduce debug_pagealloc_{map,unmap}_pages() helpers
    
    [ Upstream commit 77bc7fd607dee2ffb28daff6d0dd8ae42af61ea8 ]
    
    Patch series "arch, mm: improve robustness of direct map manipulation", v7.
    
    During a recent discussion about KVM protected memory, David raised a
    concern about the usage of __kernel_map_pages() outside of
    DEBUG_PAGEALLOC scope [1].
    
    Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
    possible that __kernel_map_pages() would fail, but since this function
    returns void, the failure will go unnoticed.
    
    Moreover, there is a lack of consistency in __kernel_map_pages()
    semantics across architectures: some guard this function with #ifdef
    DEBUG_PAGEALLOC, some refuse to update the direct map if page allocation
    debugging is disabled at run time, and some allow modifying the direct
    map regardless of DEBUG_PAGEALLOC settings.
    
    This set straightens this out by restoring the dependency of
    __kernel_map_pages() on DEBUG_PAGEALLOC and updating the call sites
    accordingly.
    
    Since the only current user of __kernel_map_pages() outside
    DEBUG_PAGEALLOC is hibernation, it is updated to make the direct map
    accesses there more explicit.
    
    [1] https://lore.kernel.org/lkml/2759b4bf-e1e3-d006-7d86-78a40348269d@xxxxxxxxxx
    
    This patch (of 4):
    
    When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
    direct mapping after free_pages().  The pages then need to be mapped back
    before they can be used.  These mapping operations use
    __kernel_map_pages() guarded with debug_pagealloc_enabled().
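
    For illustration, the open-coded pattern that this patch consolidates
    looks like this at a typical call site (a minimal sketch mirroring the
    mm/page_alloc.c hunks below):

        /* before: every call site repeats the guard itself */
        if (debug_pagealloc_enabled_static())
                kernel_map_pages(page, 1 << order, 0);  /* unmap on free */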
    
    The only place that calls __kernel_map_pages() without checking whether
    DEBUG_PAGEALLOC is enabled is the hibernation code, which presumes that
    this function is available when ARCH_HAS_SET_DIRECT_MAP is set.  Still,
    on arm64 __kernel_map_pages() will bail out when DEBUG_PAGEALLOC is not
    enabled, while set_direct_map_invalid_noflush() may render some pages
    not present in the direct map, so the hibernation code won't be able to
    save such pages.
    
    To make the interaction between page allocation debugging and
    hibernation more robust, the dependency on DEBUG_PAGEALLOC or
    ARCH_HAS_SET_DIRECT_MAP has to be made more explicit.
    
    Start by combining the guard condition and the call to
    __kernel_map_pages() into debug_pagealloc_map_pages() and
    debug_pagealloc_unmap_pages() functions, to emphasize that
    __kernel_map_pages() should not be called without DEBUG_PAGEALLOC, and
    use these new functions to map/unmap pages when page allocation
    debugging is enabled.
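
    For reference, the resulting helpers fold that guard into the call, as
    in this sketch of the include/linux/mm.h hunk below:

        static inline void debug_pagealloc_unmap_pages(struct page *page,
                                                       int numpages)
        {
                if (debug_pagealloc_enabled_static())
                        __kernel_map_pages(page, numpages, 0);
        }

    so a call site such as free_pages_prepare() becomes a single line:

        debug_pagealloc_unmap_pages(page, 1 << order);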
    
    Link: https://lkml.kernel.org/r/20201109192128.960-1-rppt@xxxxxxxxxx
    Link: https://lkml.kernel.org/r/20201109192128.960-2-rppt@xxxxxxxxxx
    Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
    Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
    Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
    Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
    Cc: Albert Ou <aou@xxxxxxxxxxxxxxxxx>
    Cc: Andy Lutomirski <luto@xxxxxxxxxx>
    Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
    Cc: Borislav Petkov <bp@xxxxxxxxx>
    Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
    Cc: Christian Borntraeger <borntraeger@xxxxxxxxxx>
    Cc: Christoph Lameter <cl@xxxxxxxxx>
    Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
    Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
    Cc: David Rientjes <rientjes@xxxxxxxxxx>
    Cc: "Edgecombe, Rick P" <rick.p.edgecombe@xxxxxxxxx>
    Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
    Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
    Cc: Ingo Molnar <mingo@xxxxxxxxxx>
    Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
    Cc: Len Brown <len.brown@xxxxxxxxx>
    Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
    Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
    Cc: Paul Mackerras <paulus@xxxxxxxxx>
    Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
    Cc: Pavel Machek <pavel@xxxxxx>
    Cc: Pekka Enberg <penberg@xxxxxxxxxx>
    Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
    Cc: "Rafael J. Wysocki" <rjw@xxxxxxxxxxxxx>
    Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
    Cc: Will Deacon <will@xxxxxxxxxx>
    Cc: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
    Stable-dep-of: fb1cf0878328 ("riscv: rewrite __kernel_map_pages() to fix sleeping in invalid context")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b8b677f47a8da..4b9c1b3656a49 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2967,12 +2967,27 @@ kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	__kernel_map_pages(page, numpages, enable);
 }
+
+static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
+{
+	if (debug_pagealloc_enabled_static())
+		__kernel_map_pages(page, numpages, 1);
+}
+
+static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
+{
+	if (debug_pagealloc_enabled_static())
+		__kernel_map_pages(page, numpages, 0);
+}
+
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif	/* CONFIG_HIBERNATION */
 #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable) {}
+static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
+static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif	/* CONFIG_HIBERNATION */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 553b0705dce8e..a29b134790596 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -596,8 +596,7 @@ void generic_online_page(struct page *page, unsigned int order)
 	 * so we should map it first. This is better than introducing a special
 	 * case in page freeing fast path.
 	 */
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order);
 	__free_pages_core(page, order);
 	totalram_pages_add(1UL << order);
 #ifdef CONFIG_HIGHMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ed66601044be5..5e02b4bc94a08 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1274,8 +1274,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 */
 	arch_free_page(page, order);
 
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 0);
+	debug_pagealloc_unmap_pages(page, 1 << order);
 
 	kasan_free_nondeferred_pages(page, order);
 
@@ -2272,8 +2271,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order);
 	kasan_alloc_pages(page, order);
 	kernel_poison_pages(page, 1 << order, 1);
 	set_page_owner(page, order, gfp_flags);
diff --git a/mm/slab.c b/mm/slab.c
index b2cc2cf7d8a33..067ffc2939904 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1434,7 +1434,7 @@ static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
 	if (!is_debug_pagealloc_cache(cachep))
 		return;
 
-	kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
+	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
 }
 
 #else



