Re: [PATCH v6 4/8] padata: dispatch works on different nodes

On 2024/2/28 05:24, Daniel Jordan wrote:
Hi,

On Thu, Feb 22, 2024 at 10:04:17PM +0800, Gang Li wrote:
When a group of tasks that access different nodes are scheduled on the
same node, they may encounter bandwidth bottlenecks and access latency.

Thus, the numa_aware flag is introduced here, allowing tasks to be
distributed across different nodes to fully utilize the advantages of
multi-node systems.

Signed-off-by: Gang Li <ligang.bdlg@xxxxxxxxxxxxx>
Tested-by: David Rientjes <rientjes@xxxxxxxxxx>
Reviewed-by: Muchun Song <muchun.song@xxxxxxxxx>
Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
---
  include/linux/padata.h |  2 ++
  kernel/padata.c        | 14 ++++++++++++--
  mm/mm_init.c           |  1 +
  3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/padata.h b/include/linux/padata.h
index 495b16b6b4d72..8f418711351bc 100644
--- a/include/linux/padata.h
+++ b/include/linux/padata.h
@@ -137,6 +137,7 @@ struct padata_shell {
   *             appropriate for one worker thread to do at once.
   * @max_threads: Max threads to use for the job, actual number may be less
   *               depending on task size and minimum chunk size.
+ * @numa_aware: Distribute jobs to different nodes with CPUs in a round-robin fashion.

numa_interleave seems more descriptive.

   */
  struct padata_mt_job {
  	void (*thread_fn)(unsigned long start, unsigned long end, void *arg);
@@ -146,6 +147,7 @@ struct padata_mt_job {
  	unsigned long		align;
  	unsigned long		min_chunk;
  	int			max_threads;
+	bool			numa_aware;
  };
  /**
diff --git a/kernel/padata.c b/kernel/padata.c
index 179fb1518070c..e3f639ff16707 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -485,7 +485,8 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
  	struct padata_work my_work, *pw;
  	struct padata_mt_job_state ps;
  	LIST_HEAD(works);
-	int nworks;
+	int nworks, nid;
+	static atomic_t last_used_nid __initdata;

nit, move last_used_nid up so it's below load_balance_factor to keep
that nice tree shape
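
That would give roughly this declaration order (sketch; the existing
load_balance_factor declaration sits just above this hunk and is elided
here):

	/* ... load_balance_factor declaration ... */
	static atomic_t last_used_nid __initdata;
	struct padata_work my_work, *pw;
	struct padata_mt_job_state ps;
	LIST_HEAD(works);
	int nworks, nid;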

  	if (job->size == 0)
  		return;
@@ -517,7 +518,16 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
  	ps.chunk_size = roundup(ps.chunk_size, job->align);
  	list_for_each_entry(pw, &works, pw_list)
-		queue_work(system_unbound_wq, &pw->pw_work);
+		if (job->numa_aware) {
+			int old_node = atomic_read(&last_used_nid);
+
+			do {
+				nid = next_node_in(old_node, node_states[N_CPU]);
+			} while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid));

There aren't concurrent NUMA-aware _do_multithreaded calls now, so an
atomic per work seems like an unnecessary expense for guarding against
possible uneven thread distribution in the future.  Non-atomic access
instead?

Hi Daniel,

Yes, this is not necessary. But I think this operation is infrequent, so
the burden shouldn't be too great?
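
For illustration, a non-atomic variant along the lines you suggest might
look roughly like this (untested sketch, not the actual patch):

	/*
	 * Sketch: plain variable instead of an atomic counter.
	 * padata_do_multithreaded() is __init-only and is not called
	 * concurrently today, so no cmpxchg loop is needed.
	 */
	static int last_used_nid __initdata;

	list_for_each_entry(pw, &works, pw_list) {
		if (job->numa_aware) {
			last_used_nid = next_node_in(last_used_nid,
						     node_states[N_CPU]);
			queue_work_node(last_used_nid, system_unbound_wq,
					&pw->pw_work);
		} else {
			queue_work(system_unbound_wq, &pw->pw_work);
		}
	}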


+			queue_work_node(nid, system_unbound_wq, &pw->pw_work);
+		} else {
+			queue_work(system_unbound_wq, &pw->pw_work);
+		}
  	/* Use the current thread, which saves starting a workqueue worker. */
  	padata_work_init(&my_work, padata_mt_helper, &ps, PADATA_WORK_ONSTACK);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2c19f5515e36c..549e76af8f82a 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2231,6 +2231,7 @@ static int __init deferred_init_memmap(void *data)
  			.align       = PAGES_PER_SECTION,
  			.min_chunk   = PAGES_PER_SECTION,
  			.max_threads = max_threads,
+			.numa_aware  = false,
  		};
  		padata_do_multithreaded(&job);
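
For contrast, a caller that does want the work spread across nodes would
set the flag; a purely hypothetical sketch (placeholder thread function,
sizes, and thread count, not taken from this series):

	struct padata_mt_job job = {
		.thread_fn   = my_thread_fn,	/* placeholder */
		.fn_arg      = NULL,
		.start       = 0,
		.size        = nr_units,	/* placeholder */
		.align       = 1,
		.min_chunk   = 1,
		.max_threads = num_node_state(N_CPU),
		.numa_aware  = true,
	};

	padata_do_multithreaded(&job);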
--
2.20.1




