[PATCH] mm, page_alloc: simplify hot/cold page handling in rmqueue_bulk()

The rmqueue_bulk() function fills an empty pcplist with pages from the free
list. It tries to return pages to the caller in increasing pfn order, because
that leads to better performance with some I/O controllers, as explained in
e084b2d95e48 ("page-allocator: preserve PFN ordering when __GFP_COLD is set").
For callers requesting cold pages, which are obtained from the tail of
pcplists, it means the pcplist has to be filled in reverse order from the free
lists (the hot/cold property only applies when pages are recycled on the
pcplists, not when refilled from free lists).

The related comment in rmqueue_bulk() wasn't clear to me without reading the
log of the commit mentioned above, so try to clarify it.

The code for filling the pcplists in order determined by the cold flag also
seems unnecessarily hard to follow. It's sufficient to either use list_add()
or list_add_tail(), but the current code also updates the list head pointer
in each step to the last added page, which then counterintuitively requires
switching the usage of list_add() and list_add_tail() to achieve the desired
order, with no apparent benefit. This patch simplifies the code.

Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
---
 mm/page_alloc.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6191c9a04789..4b296fc8e599 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2329,19 +2329,18 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 			continue;
 
 		/*
-		 * Split buddy pages returned by expand() are received here
-		 * in physical page order. The page is added to the callers and
-		 * list and the list head then moves forward. From the callers
-		 * perspective, the linked list is ordered by page number in
-		 * some conditions. This is useful for IO devices that can
-		 * merge IO requests if the physical pages are ordered
+		 * Split buddy pages returned by expand() are received here in
+		 * physical page order. The page is added to the caller's list.
+		 * From the caller's perspective, make sure the pages will be
+		 * consumed in the order returned by expand(), regardless of
+		 * cold being true or false. This is useful for IO devices that
+		 * can merge IO requests if the physical pages are ordered
 		 * properly.
 		 */
 		if (likely(!cold))
-			list_add(&page->lru, list);
-		else
 			list_add_tail(&page->lru, list);
-		list = &page->lru;
+		else
+			list_add(&page->lru, list);
 		alloced++;
 		if (is_migrate_cma(get_pcppage_migratetype(page)))
 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
-- 
2.14.2
