On 2024/6/17 7:55 PM, Barry Song wrote:
On Mon, Jun 17, 2024 at 7:36 PM Baolin Wang
<baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
On 2024/6/17 18:43, Barry Song wrote:
On Thu, Jun 6, 2024 at 3:07 PM Baolin Wang
<baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
On 2024/6/4 20:36, yangge1116 wrote:
On 2024/6/4 8:01 PM, Baolin Wang wrote:
Cc Johannes, Zi and Vlastimil.
On 2024/6/4 17:14, yangge1116@xxxxxxx wrote:
From: yangge <yangge1116@xxxxxxx>
Since commit 5d0a661d808f ("mm/page_alloc: use only one PCP list for
THP-sized allocations") no longer differentiates the migration type
of pages in the THP-sized PCP list, it is possible to get a CMA page
from the list. In some cases this is not acceptable; for example, an
allocation that must not use CMA (PF_MEMALLOC_PIN is set) can still
be handed a CMA page.
This patch forbids allocating a non-CMA THP-sized page from the
THP-sized PCP list to avoid the issue above.
Fixes: 5d0a661d808f ("mm/page_alloc: use only one PCP list for THP-sized allocations")
Signed-off-by: yangge <yangge1116@xxxxxxx>
---
mm/page_alloc.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2e22ce5..0bdf471 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2987,10 +2987,20 @@ struct page *rmqueue(struct zone *preferred_zone,
 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));

 	if (likely(pcp_allowed_order(order))) {
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+		if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
+		    order != HPAGE_PMD_ORDER) {
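(For context on why the alloc_flags & ALLOC_CMA test identifies the
allocations that are allowed to take CMA pages: in mainline, ALLOC_CMA
is only set for movable requests from tasks that are not running with
PF_MEMALLOC_PIN. A minimal sketch of that rule follows; the helper
carrying it has been named current_alloc_flags() and later
gfp_to_alloc_flags_cma() across kernel versions, so treat the function
name here as illustrative, not verbatim mainline code:

/*
 * Sketch: ALLOC_CMA is granted only when the request is movable and
 * the calling task is not under PF_MEMALLOC_PIN (e.g. long-term
 * pinning via GUP), so pinned allocations must never see CMA pages.
 */
static inline unsigned int cma_alloc_flags(gfp_t gfp_mask,
					   unsigned int alloc_flags)
{
#ifdef CONFIG_CMA
	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE &&
	    !(current->flags & PF_MEMALLOC_PIN))
		alloc_flags |= ALLOC_CMA;
#endif
	return alloc_flags;
}

With that, rmqueue() can rely on alloc_flags & ALLOC_CMA to decide
whether the shared THP PCP list, which may hold CMA pages, is safe to
use.)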
It seems you would also miss non-CMA THPs from the PCP, so I wonder if
we can add a migratetype comparison in __rmqueue_pcplist(), and if it's
not suitable, fall back to the buddy allocator?
Yes, we may miss some non-CMA THPs in the PCP. But if we add a
migratetype comparison in __rmqueue_pcplist(), we may need to compare
many times because of the PCP batch.
I mean we could compare only once, focusing on CMA pages:
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3734fe7e67c0..960a3b5744d8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2973,6 +2973,11 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 		}

 		page = list_first_entry(list, struct page, pcp_list);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+		if (order == HPAGE_PMD_ORDER &&
+		    !is_migrate_movable(migratetype) &&
+		    is_migrate_cma(get_pageblock_migratetype(page)))
+			return NULL;
+#endif
This doesn't seem ideal either. It's possible that the PCP still has
many non-CMA folios, but due to bad luck the first entry is "always"
CMA. In that case, allocations with is_migrate_movable(migratetype) ==
false would always lose the chance to use the PCP. It also appears to
incur a PCP spin lock/unlock.
Yes, just some ideas to mitigate the issue...
I don't see an ideal solution unless we bring back the CMA PCP :-)
Tend to agree, and the overhead of adding a CMA PCP list seems acceptable?
Yes, probably. Hi Ge,
Could we printk the size of struct per_cpu_pages before and after
adding 1 to NR_PCP_LISTS? Does it grow the struct by one cacheline?
struct per_cpu_pages {
	spinlock_t lock;	/* Protects lists field */
	int count;		/* number of pages in the list */
	int high;		/* high watermark, emptying needed */
	int high_min;		/* min high watermark */
	int high_max;		/* max high watermark */
	int batch;		/* chunk size for buddy add/remove */
	u8 flags;		/* protected by pcp->lock */
	u8 alloc_factor;	/* batch scaling factor during allocate */
#ifdef CONFIG_NUMA
	u8 expire;		/* When 0, remote pagesets are drained */
#endif
	short free_count;	/* consecutive free count */

	/* Lists of pages, one per migrate type stored on the pcp-lists */
	struct list_head lists[NR_PCP_LISTS];
} ____cacheline_aligned_in_smp;
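(One quick way to answer the size question, as a sketch: a temporary
debug line dropped into any init path in mm/page_alloc.c, the exact
placement is just for illustration. The crash(8) output below provides
the same answer:

	/*
	 * Temporary debug line: report the PCP struct size and how many
	 * SMP_CACHE_BYTES-sized cachelines (64 bytes on x86-64) it spans.
	 */
	pr_info("per_cpu_pages: %zu bytes, %zu cachelines\n",
		sizeof(struct per_cpu_pages),
		DIV_ROUND_UP(sizeof(struct per_cpu_pages), SMP_CACHE_BYTES));
)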
OK.
The size of struct per_cpu_pages is 256 bytes in the current code,
which contains commit 5d0a661d808f ("mm/page_alloc: use only one PCP
list for THP-sized allocations"):
crash> struct per_cpu_pages
struct per_cpu_pages {
    spinlock_t lock;
    int count;
    int high;
    int high_min;
    int high_max;
    int batch;
    u8 flags;
    u8 alloc_factor;
    u8 expire;
    short free_count;
    struct list_head lists[13];
}
SIZE: 256
After reverting commit 5d0a661d808f ("mm/page_alloc: use only one PCP
list for THP-sized allocations"), the size of struct per_cpu_pages is
272 bytes:
crash> struct per_cpu_pages
struct per_cpu_pages {
    spinlock_t lock;
    int count;
    int high;
    int high_min;
    int high_max;
    int batch;
    u8 flags;
    u8 alloc_factor;
    u8 expire;
    short free_count;
    struct list_head lists[15];
}
SIZE: 272
So it seems commit 5d0a661d808f ("mm/page_alloc: use only one PCP list
for THP-sized allocations") saves one cacheline: 256 bytes fit in four
64-byte cachelines, while 272 bytes spill into a fifth.