Re: [RFC PATCH 0/5] Accelerate page migration with batching and multi threads

On Fri, Jan 03, 2025 at 12:24:14PM -0500, Zi Yan wrote:
> Hi all,
> 
> This patchset accelerates page migration by batching folio copy operations and
> using multiple CPU threads. It builds on Shivank's "Enhancements to Page
> Migration with Batch Offloading via DMA" patchset[1] and my original accelerate
> page migration patchset[2], and applies on top of mm-everything-2025-01-03-05-59.
> The last patch is for testing purposes only and should not be considered for merging.
> 

This is well timed, as I've been testing a batch-migration variant of
migrate_misplaced_folio() for my pagecache promotion work (included below).

I will add this patchset to my pagecache branch and give it a test at some
point.
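
For reference, the call pattern I have in mind on the pagecache side looks
roughly like the sketch below. promote_pagecache_batch() is a hypothetical
wrapper (not part of either patchset); it assumes the folios on the list have
already been isolated via migrate_misplaced_folio_prepare(), mirroring what
do_numa_page() does before calling migrate_misplaced_folio():

#include <linux/migrate.h>

/*
 * Hypothetical caller sketch, not from either patchset.  Every folio on
 * @folio_list is assumed to have already passed
 * migrate_misplaced_folio_prepare() (i.e. it is isolated from the LRU),
 * so the whole batch goes through a single migrate_pages() call instead
 * of one call per folio.
 */
static int promote_pagecache_batch(struct list_head *folio_list,
				   struct vm_area_struct *vma, int node)
{
	if (list_empty(folio_list))
		return 0;

	/*
	 * On failure the batch helper puts the remaining folios back on
	 * the LRU and returns -EAGAIN, so the caller can retry later.
	 */
	return migrate_misplaced_folio_batch(folio_list, vma, node);
}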

Quick question: is the multi-threaded movement supported in the context
of task_work?  I.e., in which contexts is the multi-threaded path safe or
unsafe to use (inline in a syscall, async only, etc.)?

~Gregory

---

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 9438cc7c2aeb..17baf63964c0 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -146,6 +146,9 @@ int migrate_misplaced_folio_prepare(struct folio *folio,
                struct vm_area_struct *vma, int node);
 int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
                           int node);
+int migrate_misplaced_folio_batch(struct list_head *folio_list,
+                                 struct vm_area_struct *vma,
+                                 int node);
 #else
 static inline int migrate_misplaced_folio_prepare(struct folio *folio,
                struct vm_area_struct *vma, int node)
@@ -157,6 +160,12 @@ static inline int migrate_misplaced_folio(struct folio *folio,
 {
        return -EAGAIN; /* can't migrate now */
 }
+static inline int migrate_misplaced_folio_batch(struct list_head *folio_list,
+                                                struct vm_area_struct *vma,
+                                                int node)
+{
+       return -EAGAIN; /* can't migrate now */
+}
 #endif /* CONFIG_NUMA_BALANCING */

 #ifdef CONFIG_MIGRATION
diff --git a/mm/migrate.c b/mm/migrate.c
index 459f396f7bc1..454fd93c4cc7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2608,5 +2608,27 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
        BUG_ON(!list_empty(&migratepages));
        return nr_remaining ? -EAGAIN : 0;
 }
+
+int migrate_misplaced_folio_batch(struct list_head *folio_list,
+                                 struct vm_area_struct *vma,
+                                 int node)
+{
+       pg_data_t *pgdat = NODE_DATA(node);
+       unsigned int nr_succeeded;
+       int nr_remaining;
+
+       nr_remaining = migrate_pages(folio_list, alloc_misplaced_dst_folio,
+                                    NULL, node, MIGRATE_ASYNC,
+                                    MR_NUMA_MISPLACED, &nr_succeeded);
+       if (nr_remaining)
+               putback_movable_pages(folio_list);
+
+       if (nr_succeeded) {
+               count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
+               mod_node_page_state(pgdat, PGPROMOTE_SUCCESS, nr_succeeded);
+       }
+       BUG_ON(!list_empty(folio_list));
+       return nr_remaining ? -EAGAIN : 0;
+}
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_NUMA */



