[PATCH 2/3] CMA: Fix the issue that nr_try_movable only counts MIGRATE_MOVABLE memory

One of my platforms, which uses Joonsoo's CMA patch [1], has a device that
allocates a lot of MIGRATE_UNMOVABLE memory in a zone while it is working.
When this device is active, the memory status of that zone becomes
unbalanced: most of the CMA area stays free while most of the normal memory
is allocated.

The problem is this check in __rmqueue:
	if (IS_ENABLED(CONFIG_CMA) &&
		migratetype == MIGRATE_MOVABLE && zone->managed_cma_pages)
		page = __rmqueue_cma(zone, order);
Only MIGRATE_MOVABLE allocations are recorded in nr_try_movable (inside
__rmqueue_cma); allocations of other migrate types are not.  So a device
that allocates a lot of MIGRATE_UNMOVABLE memory skews the allocation
behavior of the zone.

This patch changes __rmqueue so that nr_try_movable records all
allocations from normal memory, not just MIGRATE_MOVABLE ones.

[1] https://lkml.org/lkml/2014/5/28/64
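The patched accounting can be sketched in user space as a small simulation.
This is only an illustration, not kernel code: struct zone_sim, rmqueue_sim
and the FROM_* values are hypothetical stand-ins for struct zone, __rmqueue
and its return path, modeling just the nr_try_movable/nr_try_cma budgets
from Joonsoo's patch with this fix applied.

```c
#include <assert.h>

/* Hypothetical stand-in for struct zone: only the CMA-balancing
 * counters from the CMA patch set are modeled. */
struct zone_sim {
	long nr_try_movable;
	long nr_try_cma;
	long max_try_movable;
	long max_try_cma;
};

enum source { FROM_NORMAL, FROM_CMA };

/*
 * Sketch of the fixed __rmqueue accounting: any allocation from normal
 * memory (movable or not) decrements nr_try_movable; only once that
 * budget is spent does a MIGRATE_MOVABLE request dip into CMA, and once
 * the CMA budget is also spent both counters are reset and the request
 * falls back to normal memory.
 */
static enum source rmqueue_sim(struct zone_sim *z, int movable,
			       unsigned int order)
{
	if (movable && z->nr_try_movable <= 0) {
		/* Mirrors the reworked __rmqueue_cma */
		if (z->nr_try_cma <= 0) {
			/* Reset counters, fall through to normal alloc */
			z->nr_try_movable = z->max_try_movable;
			z->nr_try_cma = z->max_try_cma;
			return FROM_NORMAL;
		}
		z->nr_try_cma -= 1L << order;
		return FROM_CMA;
	}
	z->nr_try_movable -= 1L << order;
	return FROM_NORMAL;
}
```

With the old code, the unmovable allocations in this sequence would never
touch nr_try_movable, so CMA would rarely be tried; here they drain the
movable budget and the movable requests are steered into CMA.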

Signed-off-by: Hui Zhu <zhuhui@xxxxxxxxxx>
Signed-off-by: Weixing Liu <liuweixing@xxxxxxxxxx>
---
 mm/page_alloc.c | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a8d9f03..a5bbc38 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1301,28 +1301,23 @@ static struct page *__rmqueue_cma(struct zone *zone, unsigned int order)
 {
 	struct page *page;
 
-	if (zone->nr_try_movable > 0)
-		goto alloc_movable;
+	if (zone->nr_try_cma <= 0) {
+		/* Reset counter */
+		zone->nr_try_movable = zone->max_try_movable;
+		zone->nr_try_cma = zone->max_try_cma;
 
-	if (zone->nr_try_cma > 0) {
-		/* Okay. Now, we can try to allocate the page from cma region */
-		zone->nr_try_cma -= 1 << order;
-		page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
-
-		/* CMA pages can vanish through CMA allocation */
-		if (unlikely(!page && order == 0))
-			zone->nr_try_cma = 0;
-
-		return page;
+		return NULL;
 	}
 
-	/* Reset counter */
-	zone->nr_try_movable = zone->max_try_movable;
-	zone->nr_try_cma = zone->max_try_cma;
+	/* Okay. Now, we can try to allocate the page from cma region */
+	zone->nr_try_cma -= 1 << order;
+	page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
 
-alloc_movable:
-	zone->nr_try_movable -= 1 << order;
-	return NULL;
+	/* CMA pages can vanish through CMA allocation */
+	if (unlikely(!page && order == 0))
+		zone->nr_try_cma = 0;
+
+	return page;
 }
 #endif
 
@@ -1335,9 +1330,13 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 {
 	struct page *page = NULL;
 
-	if (IS_ENABLED(CONFIG_CMA) &&
-		migratetype == MIGRATE_MOVABLE && zone->managed_cma_pages)
-		page = __rmqueue_cma(zone, order);
+	if (IS_ENABLED(CONFIG_CMA) && zone->managed_cma_pages) {
+		if (migratetype == MIGRATE_MOVABLE
+		    && zone->nr_try_movable <= 0)
+			page = __rmqueue_cma(zone, order);
+		else
+			zone->nr_try_movable -= 1 << order;
+	}
 
 retry_reserve:
 	if (!page)
-- 
1.9.1



