[PATCH v2] mm: optimize page allocation when CMA is enabled

From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>

Consider the series of scenarios below with WMARK_LOW=25MB and WMARK_MIN=5MB
(1.9GB of managed pages). The current 'fixed 1/2 ratio' heuristic does not
start using CMA until scenario C, by which point the unmovable and
reclaimable (U&R) free pages have already dropped below WMARK_LOW. This goes
against the current memory policy: U&R should either stay around WMARK_LOW
when there is no allocation, or trigger reclaim by entering the slowpath.
The table shows, for each scenario (free_cma/free_pages in MB), whether CMA
is used first:

free_cma/free_pages(MB)      A(12/30)     B(12/25)     C(12/20)
fixed 1/2 ratio                 N             N           Y
this commit                     Y             Y           Y
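
As a worked example: in scenario A, U&R = free_pages - free_cma = 30MB -
12MB = 18MB, which is already below WMARK_LOW (25MB), so this commit starts
using CMA immediately. The fixed 1/2 ratio instead waits for free_cma to
exceed free_pages/2, which first happens in C (12MB > 20MB/2 = 10MB), by
which point U&R has dropped to 8MB.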

Suggested-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
---
v2: do the proportion check only when zone_watermark_ok() passes; update the commit message
---
 mm/page_alloc.c | 41 +++++++++++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aed..d0baeab 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3071,6 +3071,39 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 
 }
 
+#ifdef CONFIG_CMA
+static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long cma_proportion = 0;
+	unsigned long cma_free_proportion = 0;
+	unsigned long watermark = 0;
+	long count = 0;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* check if GFP_MOVABLE passed the watermark only via CMA's help */
+	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
+		/* the watermark failed without CMA: use CMA first so U&R
+		 * pages stay around WMARK_LOW while drained by GFP_MOVABLE
+		 */
+		cma_first = true;
+	} else {
+		/* the watermark passed: decide by CMA free-page proportion */
+		count = atomic_long_read(&zone->managed_pages);
+		cma_proportion = zone->cma_pages * 100 / count;
+		cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
+			/  zone_page_state(zone, NR_FREE_PAGES);
+		cma_first = (cma_free_proportion >= cma_proportion * 2
+				|| cma_free_proportion >= 50);
+	}
+	return cma_first;
+}
+#else
+static inline bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -3087,10 +3120,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 		 * allocating from CMA when over half of the zone's free memory
 		 * is in the CMA area.
 		 */
-		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
-			page = __rmqueue_cma_fallback(zone, order);
+		if (migratetype == MIGRATE_MOVABLE) {
+			bool cma_first = __if_use_cma_first(zone, order, alloc_flags);
+
+			page = cma_first ? __rmqueue_cma_fallback(zone, order) : NULL;
 			if (page)
 				return page;
 		}
-- 
1.9.1
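
For readers who want to check the table above, here is a minimal userspace
sketch (not kernel code) that reproduces both decisions for the three
scenarios. The zone's total CMA size is not stated in the commit message, so
CMA_PAGES below is an assumed value, and the watermark test is simplified to
comparing U&R free pages against WMARK_LOW:

#include <stdbool.h>
#include <stdio.h>

#define WMARK_LOW	25	/* MB, from the commit message */
#define MANAGED		1900	/* MB of managed pages, from the commit message */
#define CMA_PAGES	100	/* MB of CMA in the zone; assumed for the demo */

/* old rule: use CMA once more than half of the free memory is CMA */
static bool fixed_half_ratio(long free_cma, long free_pages)
{
	return free_cma > free_pages / 2;
}

/* new rule, mirroring __if_use_cma_first() with the watermark check
 * simplified to "U&R free pages below WMARK_LOW"
 */
static bool proportion_check(long free_cma, long free_pages)
{
	long unr = free_pages - free_cma;	/* U&R free pages */
	long cma_prop, cma_free_prop;

	if (unr < WMARK_LOW)	/* watermark fails without CMA */
		return true;
	cma_prop = CMA_PAGES * 100 / MANAGED;
	cma_free_prop = free_cma * 100 / free_pages;
	return cma_free_prop >= cma_prop * 2 || cma_free_prop >= 50;
}

int main(void)
{
	const struct { char name; long cma, pages; } s[] = {
		{ 'A', 12, 30 }, { 'B', 12, 25 }, { 'C', 12, 20 },
	};

	for (int i = 0; i < 3; i++)
		printf("%c(%ld/%ld): fixed 1/2 ratio=%c, this commit=%c\n",
		       s[i].name, s[i].cma, s[i].pages,
		       fixed_half_ratio(s[i].cma, s[i].pages) ? 'Y' : 'N',
		       proportion_check(s[i].cma, s[i].pages) ? 'Y' : 'N');
	return 0;
}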



