[tip: sched/core] sched/rt: Re-instate old behavior in select_task_rq_rt()

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     b28bc1e002c23ff8a4999c4a2fb1d4d412bc6f5e
Gitweb:        https://git.kernel.org/tip/b28bc1e002c23ff8a4999c4a2fb1d4d412bc6f5e
Author:        Qais Yousef <qais.yousef@xxxxxxx>
AuthorDate:    Mon, 02 Mar 2020 13:27:17 
Committer:     Ingo Molnar <mingo@xxxxxxxxxx>
CommitterDate: Fri, 06 Mar 2020 12:57:27 +01:00

sched/rt: Re-instate old behavior in select_task_rq_rt()

When RT capacity awareness support was added, the logic in
select_task_rq_rt() was modified to force a search for a fitting CPU if
the task currently isn't running on one.

But if the search fails and was only triggered to fulfill the fitness
request, we could end up selecting a new CPU unnecessarily.

Fix this and re-instate the original behavior by ensuring we bail out
in that case.
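
As an illustration only (not part of the patch), here is a minimal
user-space sketch of the resulting decision flow. fits(),
select_cpu_model() and the lowest_rq parameter are stand-ins invented
here for rt_task_fits_capacity(), select_task_rq_rt() and
find_lowest_rq(), and the lower-priority check is elided:

	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-in for rt_task_fits_capacity(): pretend CPU2+ are the big CPUs. */
	static bool fits(int cpu)
	{
		return cpu >= 2;
	}

	/*
	 * Model of the tail of select_task_rq_rt(): 'test' mirrors the
	 * priority-based "push this task elsewhere?" condition, and
	 * 'lowest_rq' stands in for find_lowest_rq()'s result (-1 if none).
	 */
	static int select_cpu_model(int cur_cpu, bool test, int lowest_rq)
	{
		if (test || !fits(cur_cpu)) {
			int target = lowest_rq;

			/*
			 * The fix: if the search was only done for capacity
			 * reasons (!test) and the candidate still does not
			 * fit, keep the current CPU instead of migrating.
			 */
			if (!test && target != -1 && !fits(target))
				return cur_cpu;

			if (target != -1)
				return target; /* lower-priority check elided */
		}
		return cur_cpu;
	}

	int main(void)
	{
		/* Capacity-only search, no fitting CPU found: stay on CPU0. */
		printf("CPU%d\n", select_cpu_model(0, false, 1));
		return 0;
	}

In this model the capacity-only search returns CPU1, which fits no
better than CPU0, so the task stays put; before the fix it would have
been moved to CPU1 anyway.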

This behavior change only affected asymmetric systems that use
util_clamp to implement capacity awareness. Non-asymmetric systems were
not affected.

LINK: https://lore.kernel.org/lkml/20200218041620.GD28029@xxxxxxxxxxxxxx/
Reported-by: Pavan Kondeti <pkondeti@xxxxxxxxxxxxxx>
Signed-off-by: Qais Yousef <qais.yousef@xxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Fixes: 804d402fb6f6 ("sched/rt: Make RT capacity-aware")
Link: https://lkml.kernel.org/r/20200302132721.8353-3-qais.yousef@xxxxxxx
---
 kernel/sched/rt.c |  9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 55a4a50..f0071fa 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1475,6 +1475,13 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
 		int target = find_lowest_rq(p);
 
 		/*
+		 * Bail out if we were forcing a migration to find a better
+		 * fitting CPU but our search failed.
+		 */
+		if (!test && target != -1 && !rt_task_fits_capacity(p, target))
+			goto out_unlock;
+
+		/*
 		 * Don't bother moving it if the destination CPU is
 		 * not running a lower priority task.
 		 */
@@ -1482,6 +1489,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
 		    p->prio < cpu_rq(target)->rt.highest_prio.curr)
 			cpu = target;
 	}
+
+out_unlock:
 	rcu_read_unlock();
 
 out:


