+ slob-use-slab_list-instead-of-lru.patch added to -mm tree

The patch titled
     Subject: mm/slob.c: use slab_list instead of lru
has been added to the -mm tree.  Its filename is
     slob-use-slab_list-instead-of-lru.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/slob-use-slab_list-instead-of-lru.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/slob-use-slab_list-instead-of-lru.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: "Tobin C. Harding" <tobin@xxxxxxxxxx>
Subject: mm/slob.c: use slab_list instead of lru

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list_head in the page structure (slab_list) that can be used for
this purpose.  Doing so makes the code cleaner since we are not
overloading the lru list.

The slab_list is part of a union within the page struct (included here
stripped down):

	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			...
		};
		struct {
			dma_addr_t dma_addr;
		};
		struct {	/* slab, slob and slub */
			union {
				struct list_head slab_list;
				struct {	/* Partial pages */
					struct page *next;
					int pages;	/* Nr of pages left */
					int pobjects;	/* Approximate count */
				};
			};
		...

Here we see that slab_list and lru occupy the same bits.  We can verify
that this change is safe by comparing the object file produced from
slob.c before and after this patch is applied.
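
The aliasing can also be sanity-checked in userspace.  The sketch below
is hypothetical: mock_page and a minimal list_head stand in for the
real kernel types, mirroring the stripped-down union quoted above, and
a compile-time assertion confirms that lru and slab_list start at the
same offset:

	/* Userspace sketch only -- mock_page is a reduced stand-in for
	 * the real struct page quoted above.  Build with: gcc -std=c11 */
	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	struct list_head {
		struct list_head *next, *prev;
	};

	struct mock_page {
		union {
			struct {	/* Page cache and anonymous pages */
				struct list_head lru;
			};
			struct {	/* slab, slob and slub */
				union {
					struct list_head slab_list;
					struct {	/* Partial pages */
						struct mock_page *next;
						int pages;
						int pobjects;
					};
				};
			};
		};
	};

	/* Members of a union share storage, so the two list heads must
	 * sit at the same offset and swapping one name for the other
	 * cannot change the generated code. */
	static_assert(offsetof(struct mock_page, lru) ==
		      offsetof(struct mock_page, slab_list),
		      "lru and slab_list must be the same bits");

	int main(void)
	{
		printf("lru at %zu, slab_list at %zu\n",
		       offsetof(struct mock_page, lru),
		       offsetof(struct mock_page, slab_list));
		return 0;
	}

Equal offsets are what make the lru -> slab_list substitution purely
mechanical; the objdump comparison below confirms the same thing at
the generated-code level.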

Steps taken to verify:

 1. checkout current tip of Linus' tree

    commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")

 2. configure and build (select SLOB allocator)

    CONFIG_SLOB=y
    CONFIG_SLAB_MERGE_DEFAULT=y

 3. disassemble object file: `objdump -dr mm/slob.o > before.s`
 4. apply patch
 5. build
 6. disassemble object file: `objdump -dr mm/slob.o > after.s`
 7. diff before.s after.s

An empty diff shows that the compiler generates identical code before
and after the patch.

Use the slab_list list_head instead of the lru list_head for
maintaining lists of slabs.

Link: http://lkml.kernel.org/r/20190318000234.22049-4-tobin@xxxxxxxxxx
Signed-off-by: Tobin C. Harding <tobin@xxxxxxxxxx>
Reviewed-by: Roman Gushchin <guro@xxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slob.c |   16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

--- a/mm/slob.c~slob-use-slab_list-instead-of-lru
+++ a/mm/slob.c
@@ -112,13 +112,13 @@ static inline int slob_page_free(struct
 
 static void set_slob_page_free(struct page *sp, struct list_head *list)
 {
-	list_add(&sp->lru, list);
+	list_add(&sp->slab_list, list);
 	__SetPageSlobFree(sp);
 }
 
 static inline void clear_slob_page_free(struct page *sp)
 {
-	list_del(&sp->lru);
+	list_del(&sp->slab_list);
 	__ClearPageSlobFree(sp);
 }
 
@@ -282,7 +282,7 @@ static void *slob_alloc(size_t size, gfp
 
 	spin_lock_irqsave(&slob_lock, flags);
 	/* Iterate through each partially free page, try to find room */
-	list_for_each_entry(sp, slob_list, lru) {
+	list_for_each_entry(sp, slob_list, slab_list) {
 #ifdef CONFIG_NUMA
 		/*
 		 * If there's a node specification, search for a partial
@@ -299,22 +299,22 @@ static void *slob_alloc(size_t size, gfp
 		 * Cache previous entry because slob_page_alloc() may
 		 * remove sp from slob_list.
 		 */
-		prev = list_prev_entry(sp, lru);
+		prev = list_prev_entry(sp, slab_list);
 
 		/* Attempt to alloc */
 		b = slob_page_alloc(sp, size, align);
 		if (!b)
 			continue;
 
-		next = list_next_entry(prev, lru); /* This may or may not be sp */
+		next = list_next_entry(prev, slab_list); /* This may or may not be sp */
 
 		/*
 		 * Improve fragment distribution and reduce our average
 		 * search time by starting our next search here. (see
 		 * Knuth vol 1, sec 2.5, pg 449)
 		 */
-		if (!list_is_first(&next->lru, slob_list))
-			list_rotate_to_front(&next->lru, slob_list);
+		if (!list_is_first(&next->slab_list, slob_list))
+			list_rotate_to_front(&next->slab_list, slob_list);
 
 		break;
 	}
@@ -331,7 +331,7 @@ static void *slob_alloc(size_t size, gfp
 		spin_lock_irqsave(&slob_lock, flags);
 		sp->units = SLOB_UNITS(PAGE_SIZE);
 		sp->freelist = b;
-		INIT_LIST_HEAD(&sp->lru);
+		INIT_LIST_HEAD(&sp->slab_list);
 		set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
 		set_slob_page_free(sp, slob_list);
 		b = slob_page_alloc(sp, size, align);
_
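
For readers less familiar with the kernel's intrusive lists, here is a
hypothetical userspace sketch (mock types again, not kernel code) of
what an iteration like list_for_each_entry(sp, slob_list, slab_list)
boils down to: the member name only feeds container_of, which is why
renaming the member from lru to slab_list is purely mechanical.

	#include <stddef.h>
	#include <stdio.h>

	struct list_head {
		struct list_head *next, *prev;
	};

	#define LIST_HEAD_INIT(name)	{ &(name), &(name) }
	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	/* Insert new right after head, as the kernel's list_add() does. */
	static void list_add(struct list_head *new, struct list_head *head)
	{
		new->next = head->next;
		new->prev = head;
		head->next->prev = new;
		head->next = new;
	}

	struct mock_page {
		struct list_head slab_list;
		int units;
	};

	int main(void)
	{
		struct list_head slob_list = LIST_HEAD_INIT(slob_list);
		struct mock_page a = { .units = 3 }, b = { .units = 7 };

		list_add(&a.slab_list, &slob_list);
		list_add(&b.slab_list, &slob_list);

		/* Hand-rolled list_for_each_entry over slab_list: walk the
		 * link nodes, then recover the containing page with
		 * container_of() using the member's offset. */
		for (struct list_head *pos = slob_list.next;
		     pos != &slob_list; pos = pos->next) {
			struct mock_page *sp =
				container_of(pos, struct mock_page, slab_list);
			printf("page with %d units free\n", sp->units);
		}
		return 0;
	}

Only the offset of slab_list within the page structure is used, and as
shown earlier that offset is identical to lru's.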

Patches currently in -mm which might be from tobin@xxxxxxxxxx are

list-add-function-list_rotate_to_front.patch
slob-respect-list_head-abstraction-layer.patch
slob-use-slab_list-instead-of-lru.patch
slub-add-comments-to-endif-pre-processor-macros.patch
slub-use-slab_list-instead-of-lru.patch
slab-use-slab_list-instead-of-lru.patch
mm-remove-stale-comment-from-page-struct.patch



