+ slob-only-use-list-functions-when-safe-to-do-so.patch added to -mm tree

The patch titled
     Subject: slob: only use list functions when safe to do so
has been added to the -mm tree.  Its filename is
     slob-only-use-list-functions-when-safe-to-do-so.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/slob-only-use-list-functions-when-safe-to-do-so.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/slob-only-use-list-functions-when-safe-to-do-so.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: "Tobin C. Harding" <tobin@xxxxxxxxxx>
Subject: slob: only use list functions when safe to do so

Currently we (indirectly) call list_del() and then manually try to work
around the fact that the list may be left in an undefined state by getting
'prev' and 'next' pointers in a somewhat contrived manner.  It is hard to
verify that this works for all initial states of the list.  Clearly the
author (me) got it wrong the first time, because the 0day kernel testing
robot managed to crash the kernel thanks to this code.

All this is done in order to carry out an optimisation aimed at preventing
fragmentation at the start of a slab.  We can simply skip the optimisation
any time the list is put into an undefined state, since this only occurs
when an allocation completely fills the slab, and in that case the
optimisation is unnecessary because the allocation has not fragmented the
slab.

Change the page pointer passed to slob_page_alloc() to be a double pointer
so that we can set it to NULL to indicate that the page was removed from
the list.  Skip the optimisation if the page was removed.
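
For illustration, here is a minimal userspace sketch of the double-pointer
idiom described above (not the slob code itself; struct demo_page and
demo_page_alloc() are made-up names): the callee NULLs the caller's pointer
when the page fills and would be removed from the list, and the caller then
skips any list operations on it.

#include <stdio.h>

struct demo_page {
	int units;			/* free units remaining in this page */
};

/*
 * Allocate 'units' from *pp.  If the allocation empties the page, set
 * *pp to NULL so the caller knows the page "left the list" and must
 * not touch its list linkage.
 */
static int demo_page_alloc(struct demo_page **pp, int units)
{
	struct demo_page *p = *pp;

	if (p->units < units)
		return 0;		/* no room; caller keeps searching */

	p->units -= units;
	if (!p->units)
		*pp = NULL;		/* signal: page was removed from the list */
	return 1;
}

int main(void)
{
	struct demo_page page = { .units = 4 };
	struct demo_page *pp = &page;

	if (demo_page_alloc(&pp, 4) && !pp)
		printf("page filled: skip the list-rotation optimisation\n");
	return 0;
}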

Found thanks to the kernel test robot, email subject:

	340d3d6178 ("mm/slob.c: respect list_head abstraction layer"):  kernel BUG at lib/list_debug.c:31!

Link: http://lkml.kernel.org/r/20190402032957.26249-2-tobin@xxxxxxxxxx
Signed-off-by: Tobin C. Harding <tobin@xxxxxxxxxx>
Reported-by: kernel test robot <lkp@xxxxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slob.c |   50 ++++++++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 20 deletions(-)

--- a/mm/slob.c~slob-only-use-list-functions-when-safe-to-do-so
+++ a/mm/slob.c
@@ -213,10 +213,18 @@ static void slob_free_pages(void *b, int
 }
 
 /*
- * Allocate a slob block within a given slob_page sp.
+ * slob_page_alloc() - Allocate a slob block within a given slob_page sp.
+ * @spp: Page to look in, return parameter.
+ * @size: Size of the allocation.
+ * @align: Allocation alignment.
+ *
+ * Tries to find a chunk of memory at least @size bytes within the page.  If
+ * the allocation fills up the page then the page is removed from the list;
+ * in this case *spp will be set to %NULL to signal that list removal occurred.
  */
-static void *slob_page_alloc(struct page *sp, size_t size, int align)
+static void *slob_page_alloc(struct page **spp, size_t size, int align)
 {
+	struct page *sp = *spp;
 	slob_t *prev, *cur, *aligned = NULL;
 	int delta = 0, units = SLOB_UNITS(size);
 
@@ -254,8 +262,11 @@ static void *slob_page_alloc(struct page
 			}
 
 			sp->units -= units;
-			if (!sp->units)
+			if (!sp->units) {
 				clear_slob_page_free(sp);
+				/* Signal that page was removed from list. */
+				*spp = NULL;
+			}
 			return cur;
 		}
 		if (slob_last(cur))
@@ -268,7 +279,7 @@ static void *slob_page_alloc(struct page
  */
 static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 {
-	struct page *sp, *prev, *next;
+	struct page *sp;
 	struct list_head *slob_list;
 	slob_t *b = NULL;
 	unsigned long flags;
@@ -283,6 +294,7 @@ static void *slob_alloc(size_t size, gfp
 	spin_lock_irqsave(&slob_lock, flags);
 	/* Iterate through each partially free page, try to find room */
 	list_for_each_entry(sp, slob_list, slab_list) {
+		struct page **spp = &sp;
 #ifdef CONFIG_NUMA
 		/*
 		 * If there's a node specification, search for a partial
@@ -295,27 +307,25 @@ static void *slob_alloc(size_t size, gfp
 		if (sp->units < SLOB_UNITS(size))
 			continue;
 
-		/*
-		 * Cache previous entry because slob_page_alloc() may
-		 * remove sp from slob_list.
-		 */
-		prev = list_prev_entry(sp, slab_list);
-
 		/* Attempt to alloc */
-		b = slob_page_alloc(sp, size, align);
+		b = slob_page_alloc(spp, size, align);
 		if (!b)
 			continue;
 
-		next = list_next_entry(prev, slab_list); /* This may or may not be sp */
-
 		/*
-		 * Improve fragment distribution and reduce our average
-		 * search time by starting our next search here. (see
-		 * Knuth vol 1, sec 2.5, pg 449)
+		 * If slob_page_alloc() removed sp from the list then we
+		 * cannot call list functions on sp.  Just bail, don't
+		 * worry about the optimisation below.
 		 */
-		if (!list_is_first(&next->slab_list, slob_list))
-			list_rotate_to_front(&next->slab_list, slob_list);
-
+		if (*spp) {
+			/*
+			 * Improve fragment distribution and reduce our average
+			 * search time by starting our next search here. (see
+			 * Knuth vol 1, sec 2.5, pg 449)
+			 */
+			if (!list_is_first(&sp->slab_list, slob_list))
+				list_rotate_to_front(&sp->slab_list, slob_list);
+		}
 		break;
 	}
 	spin_unlock_irqrestore(&slob_lock, flags);
@@ -334,7 +344,7 @@ static void *slob_alloc(size_t size, gfp
 		INIT_LIST_HEAD(&sp->slab_list);
 		set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
 		set_slob_page_free(sp, slob_list);
-		b = slob_page_alloc(sp, size, align);
+		b = slob_page_alloc(&sp, size, align);
 		BUG_ON(!b);
 		spin_unlock_irqrestore(&slob_lock, flags);
 	}
_

Patches currently in -mm which might be from tobin@xxxxxxxxxx are

list-add-function-list_rotate_to_front.patch
slob-respect-list_head-abstraction-layer.patch
slob-use-slab_list-instead-of-lru.patch
slob-only-use-list-functions-when-safe-to-do-so.patch
slub-add-comments-to-endif-pre-processor-macros.patch
slub-use-slab_list-instead-of-lru.patch
slab-use-slab_list-instead-of-lru.patch
mm-remove-stale-comment-from-page-struct.patch



