The following commit has been merged into the sched/core branch of tip:

Commit-ID:     983be0628c061989b6cc175d2f5e429b40699fbb
Gitweb:        https://git.kernel.org/tip/983be0628c061989b6cc175d2f5e429b40699fbb
Author:        Ingo Molnar <mingo@xxxxxxxxxx>
AuthorDate:    Fri, 08 Mar 2024 12:18:09 +01:00
Committer:     Ingo Molnar <mingo@xxxxxxxxxx>
CommitterDate: Tue, 12 Mar 2024 11:59:59 +01:00

sched/balancing: Rename trigger_load_balance() => sched_balance_trigger()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Reviewed-by: Shrikanth Hegde <sshegde@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20240308111819.1101550-4-mingo@xxxxxxxxxx
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 kernel/sched/core.c                                          | 2 +-
 kernel/sched/fair.c                                          | 2 +-
 kernel/sched/sched.h                                         | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 541d6c6..c7ea05f 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -31,7 +31,7 @@ is treated as one entity. The load of a group is defined as the sum of the load
 of each of its member CPUs, and only when the load of a group becomes out of
 balance are tasks moved between groups.
 
-In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index fa0c0bc..1a8587a 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空，如果是这
 调度域中的负载均衡发生在调度组中。也就是说，每个组被视为一个实体。组的负载被定义为它
 管辖的每个CPU的负载之和。仅当组的负载不均衡后，任务才在组之间发生迁移。
 
-在kernel/sched/core.c中，trigger_load_balance()在每个CPU上通过sched_tick()
+在kernel/sched/core.c中，sched_balance_trigger()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后，它引发一个软中断。负载均衡真正
 的工作由sched_balance_softirq()->rebalance_domains()完成，在软中断上下文中执行
 (SCHED_SOFTIRQ)。
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 71b7a08..929fce6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5700,7 +5700,7 @@ void sched_tick(void)
 
 #ifdef CONFIG_SMP
 	rq->idle_balance = idle_cpu(cpu);
-	trigger_load_balance(rq);
+	sched_balance_trigger(rq);
 #endif
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 953f39d..e377b67 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12438,7 +12438,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 /*
  * Trigger the SCHED_SOFTIRQ if it is time to do periodic load balancing.
  */
-void trigger_load_balance(struct rq *rq)
+void sched_balance_trigger(struct rq *rq)
 {
 	/*
 	 * Don't need to rebalance while attached to NULL domain or
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d224267..5b0ddb0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2397,7 +2397,7 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 
 extern void update_group_capacity(struct sched_domain *sd, int cpu);
 
-extern void trigger_load_balance(struct rq *rq);
+extern void sched_balance_trigger(struct rq *rq);
 
 extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);