[PATCH 53/62] mm/slub: Remove pfmemalloc_match_unsafe()

slab_test_pfmemalloc() does not assert PageSlab() (unlike
PageSlabPfmemalloc()), so it is safe to call opportunistically on a
slab that may have been freed under us, and the
pfmemalloc_match_unsafe() variant is no longer needed.
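
For reference, a sketch of how the surviving helper reads once the
struct slab conversion is applied (slab_test_pfmemalloc() is the
mm/slab.h helper introduced earlier in this series; the body below is
illustrative, not part of this patch):

	static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
	{
		/*
		 * Reads the pfmemalloc flag from the struct slab without
		 * the PageSlab() assertion that PageSlabPfmemalloc() carries.
		 */
		if (unlikely(slab_test_pfmemalloc(slab)))
			return gfp_pfmemalloc_allowed(gfpflags);

		return true;
	}

The opportunistic check in ___slab_alloc() can therefore call
pfmemalloc_match() directly; a stale result is caught by the c->slab
recheck that follows it.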

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 mm/slub.c | 15 +--------------
 1 file changed, 1 insertion(+), 14 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 7e2c5342196a..229fc56809c2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2796,19 +2796,6 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
 	return true;
 }
 
-/*
- * A variant of pfmemalloc_match() that tests page flags without asserting
- * PageSlab. Intended for opportunistic checks before taking a lock and
- * rechecking that nobody else freed the page under us.
- */
-static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags)
-{
-	if (unlikely(__PageSlabPfmemalloc(page)))
-		return gfp_pfmemalloc_allowed(gfpflags);
-
-	return true;
-}
-
 /*
  * Check the freelist of a slab and either transfer the freelist to the
  * per cpu freelist or deactivate the slab
@@ -2905,7 +2892,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	 * PFMEMALLOC but right now, we lose the pfmemalloc
 	 * information when the page leaves the per-cpu allocator
 	 */
-	if (unlikely(!pfmemalloc_match_unsafe(slab_page(slab), gfpflags)))
+	if (unlikely(!pfmemalloc_match(slab, gfpflags)))
 		goto deactivate_slab;
 
 	/* must check again c->slab in case we got preempted and it changed */
-- 
2.32.0




