Re: [PATCH v3] mm/min_free_kbytes: modify min_free_kbytes calculation rules

On 2023/6/21 18:00, Kefeng Wang wrote:
Hi Liuq,

On 2023/6/21 17:20, liuq wrote:
The current calculation of min_free_kbytes only uses ZONE_DMA and
ZONE_NORMAL pages, but the ZONE_MOVABLE zone->_watermark[WMARK_MIN]
also takes a share of min_free_kbytes. This causes the min watermark
of ZONE_NORMAL to be too small in the presence of ZONE_MOVABLE.

__GFP_HIGH and PF_MEMALLOC allocations usually don't need movable
zone pages, so just like ZONE_HIGHMEM, cap pages_min to a small
value in __setup_per_zone_wmarks.
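
For reference, the existing highmem branch of __setup_per_zone_wmarks() that
this change reuses for ZONE_MOVABLE caps the watermark roughly like this
(sketched from the current mm/page_alloc.c, not part of this diff):

	/* don't hand the highmem/movable zone a full share of min_free_kbytes */
	min_pages = zone_managed_pages(zone) / 1024;
	min_pages = clamp(min_pages, SWAP_CLUSTER_MAX, 128UL);	/* at most 128 pages */
	zone->_watermark[WMARK_MIN] = min_pages;

which is why ZONE_MOVABLE ends up at 128 pages in the numbers below.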

On my test machine with 16GB of memory (transparent hugepage is
turned off by default, and movablecore=12G is configured), the
comparative watermark_min data is as follows:

                 no patch    with patch
ZONE_DMA                1             8
ZONE_DMA32            151           709
ZONE_NORMAL           233          1113
ZONE_MOVABLE         1434           128
min_free_kbytes      7288          7326
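
To spell out the arithmetic (the per-zone values are watermark_min in pages;
with 4KB pages, min_free_kbytes = 7288kB is about 1822 pages, which roughly
matches 1 + 151 + 233 + 1434 = 1819):

  no patch:    lowmem_pages includes the ~12G movable zone, so ZONE_MOVABLE
               takes 1434 of the ~1822 min pages (~79%) and ZONE_NORMAL is
               left with only 233;
  with patch:  lowmem_pages covers only DMA/DMA32/NORMAL (~4G), so
               ZONE_NORMAL's share grows to 1113 pages and ZONE_MOVABLE is
               clamped to 128.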


We saw this issue and made the same change [1], and we also added a per-zone
watermark configuration, but neither of them was accepted. Let's add Mel and
Wupeng for more comments.

[1] https://lore.kernel.org/linux-mm/20220905032858.1462927-1-mawupeng1@xxxxxxxxxx/

Hi wupeng,

Thank you for your reply. Actually, in my version 1, the calculation of
min_free_kbytes in the presence of ZONE_MOVABLE was improved by adding the
ZONE_MOVABLE pages to lowmem_kbytes.

However, whether the value of watermark_min should be kept at such a high
level in that case is unclear.

https://lore.kernel.org/all/20230609032552.218010-1-liuq131@xxxxxxxxxxxxxxx/#r
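
For context, min_free_kbytes itself is derived from lowmem_kbytes roughly as
follows (see calculate_min_free_kbytes(); the 128..262144 clamping is left
out):

	lowmem_kbytes = nr_free_buffer_pages() * (PAGE_SIZE >> 10);
	min_free_kbytes = int_sqrt(lowmem_kbytes * 16);

so folding the ZONE_MOVABLE pages into lowmem_kbytes, as v1 did, also makes
min_free_kbytes scale with the square root of the movable memory.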


Best Regards

liuq

Signed-off-by: liuq <liuq131@xxxxxxxxxxxxxxx>
---
  mm/page_alloc.c | 12 ++++++------
  1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 47421bedc12b..590ed8725e09 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6362,9 +6362,9 @@ static void __setup_per_zone_wmarks(void)
 	struct zone *zone;
 	unsigned long flags;
 
-	/* Calculate total number of !ZONE_HIGHMEM pages */
+	/* Calculate total number of !ZONE_HIGHMEM and !ZONE_MOVABLE pages */
 	for_each_zone(zone) {
-		if (!is_highmem(zone))
+		if (!is_highmem(zone) && zone_idx(zone) != ZONE_MOVABLE)
 			lowmem_pages += zone_managed_pages(zone);
 	}
 
@@ -6374,15 +6374,15 @@ static void __setup_per_zone_wmarks(void)
 		spin_lock_irqsave(&zone->lock, flags);
 		tmp = (u64)pages_min * zone_managed_pages(zone);
 		do_div(tmp, lowmem_pages);
-		if (is_highmem(zone)) {
+		if (is_highmem(zone) || zone_idx(zone) == ZONE_MOVABLE) {
 			/*
 			 * __GFP_HIGH and PF_MEMALLOC allocations usually don't
-			 * need highmem pages, so cap pages_min to a small
-			 * value here.
+			 * need highmem and movable zone pages, so cap pages_min
+			 * to a small value here.
 			 *
 			 * The WMARK_HIGH-WMARK_LOW and (WMARK_LOW-WMARK_MIN)
 			 * deltas control async page reclaim, and so should
-			 * not be capped for highmem.
+			 * not be capped for highmem and movable zones.
 			 */
 			unsigned long min_pages;
 
