[PATCH 1/1] sched/fair: improve yield_to vs fairness

After some debugging in situations where smaller sched_latency_ns and
sched_migration_cost settings helped on a KVM host, I was able to come
up with a reduced testcase.
This testcase has 2 vcpus working on a shared memory location, each
waiting for mem % 2 == cpu number before doing an add on the shared
memory.
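For illustration, a minimal sketch of the guest loop (the names
vcpu_loop and shared are made up for this example, not the actual
testcase code):

	/* Each vcpu ping-pongs on a shared counter. */
	static volatile unsigned long *shared;	/* visible to both vcpus */

	static void vcpu_loop(unsigned long cpu)	/* cpu is 0 or 1 */
	{
		for (;;) {
			while (*shared % 2 != cpu)
				;		/* spin until it is our turn */
			(*shared)++;		/* hand the turn to the other vcpu */
		}
	}
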
To start simple, I pinned all vcpus to one host CPU. Without the
yield_to in KVM the testcase was horribly slow. This is expected, as
each vcpu spins for a whole time slice. With the yield_to from KVM
things are much better, but I was still seeing yields being ignored.
In the end pick_next_entity decided to keep the current process running
for fairness reasons. On this path we really know that there is no
point in continuing current. So let us make things a bit unfairer to
current.
This makes the reduced testcase noticeably faster. It improved a more
realistic test case (many guests on some host CPUs with overcommitment)
even more.
In the end this is similar to the old compat_sched_yield approach, with
an important difference: instead of doing it for all yields, we now
only do it for yield_to, a place where we really know that current is
waiting for the target.

Alternative implementations for this patch would be:
- do the same as the old compat_sched_yield (see the sketch after this
  list):
  current->vruntime = rightmost->vruntime + 1
- provide a new tunable sched_ns_yield_penalty: how much vruntime to add
  (could be per architecture)
- also fiddle with the vruntime of the target,
  e.g. subtract from the target what we add to the source
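A rough sketch of the first alternative, assuming the existing
__pick_last_entity() helper in fair.c can be used to find the rightmost
entity (the function name yield_penalty_like_compat is made up for this
sketch):

	/* Sketch only: push current just past the rightmost entity. */
	static void yield_penalty_like_compat(struct cfs_rq *cfs_rq,
					      struct sched_entity *curr)
	{
		struct sched_entity *rightmost = __pick_last_entity(cfs_rq);

		if (rightmost && rightmost->vruntime > curr->vruntime)
			curr->vruntime = rightmost->vruntime + 1;
	}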

Signed-off-by: Christian Borntraeger <borntraeger@xxxxxxxxxx>
---
 kernel/sched/fair.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 23663318fb81..4f661a9ed3ba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7337,6 +7337,7 @@ static void yield_task_fair(struct rq *rq)
 static bool yield_to_task_fair(struct rq *rq, struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
+	struct sched_entity *curr = &rq->curr->se;
 
 	/* throttled hierarchies are not runnable */
 	if (!se->on_rq || throttled_hierarchy(cfs_rq_of(se)))
@@ -7347,6 +7348,16 @@ static bool yield_to_task_fair(struct rq *rq, struct task_struct *p)
 
 	yield_task_fair(rq);
 
+	/*
+	 * This path is special and only called from KVM. In contrast to yield,
+	 * in yield_to we really know that current is spinning and we know
+	 * (s390) or have good heuristics about whom we are waiting for.
+	 * There is absolutely no point in continuing the current task, even
+	 * if this means becoming unfairer. Let us give the current process
+	 * some "fake" penalty.
+	 */
+	curr->vruntime += sched_slice(cfs_rq_of(curr), curr);
+
 	return true;
 }
 
-- 
2.31.1



