Re: [PATCH 1/1] sched/fair: improve yield_to vs fairness

On 26.07.21 21:32, Mel Gorman wrote:
On Mon, Jul 26, 2021 at 08:41:15PM +0200, Christian Borntraeger wrote:
Potentially. The patch was a bit off because, while it noticed that skip
was not being obeyed, the fix was clumsy and isolated. The current flow is:

1. Pick se == left as the candidate
2. Try to pick a different se if the "ideal" candidate is a skip candidate
3. Ignore the se update if next or last are set

Step 3 looks off because it ignores skip if next or last buddies are set
and I don't think that was intended. Can you try this?

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 44c452072a1b..d56f7772a607 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4522,12 +4522,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
   			se = second;
   	}
-	if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1) {
+	if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, se) < 1) {
   		/*
   		 * Someone really wants this to run. If it's not unfair, run it.
   		 */
   		se = cfs_rq->next;
-	} else if (cfs_rq->last && wakeup_preempt_entity(cfs_rq->last, left) < 1) {
+	} else if (cfs_rq->last && wakeup_preempt_entity(cfs_rq->last, se) < 1) {
   		/*
   		 * Prefer last buddy, try to return the CPU to a preempted task.
   		 */
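
For orientation, the function under discussion is pick_next_entity() in
kernel/sched/fair.c. Condensed from a kernel of roughly that vintage (the
original comments, clear_buddies() and some corner cases are trimmed, so
double-check against your own tree), the flow looks about like this, with
the three steps above marked:

static struct sched_entity *
pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
	struct sched_entity *left = __pick_first_entity(cfs_rq);
	struct sched_entity *se;

	/* step 1: the leftmost entity (or curr, if it is further left)
	 * is the ideal candidate */
	if (!left || (curr && entity_before(curr, left)))
		left = curr;
	se = left;

	/* step 2: if the ideal candidate is the skip (yield) buddy,
	 * try the runner-up, but only if that is not too unfair */
	if (cfs_rq->skip && cfs_rq->skip == se) {
		struct sched_entity *second;

		if (se == curr) {
			second = __pick_first_entity(cfs_rq);
		} else {
			second = __pick_next_entity(se);
			if (!second || (curr && entity_before(curr, second)))
				second = curr;
		}

		if (second && wakeup_preempt_entity(second, left) < 1)
			se = second;
	}

	/* step 3: next/last buddy handling; the hunk above changes the
	 * second argument from 'left' to 'se' so that the result of
	 * step 2 is no longer thrown away */
	if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1)
		se = cfs_rq->next;
	else if (cfs_rq->last && wakeup_preempt_entity(cfs_rq->last, left) < 1)
		se = cfs_rq->last;

	return se;
}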


This one alone does not seem to make a difference, neither in ignored
yields nor in performance.

Your first patch really does help in terms of ignored yields when
all threads are pinned to one host CPU.

Ok, that tells us something. It implies, but does not prove, that the
block above that handles skip is failing either the entity_before()
test or the wakeup_preempt_entity() test. To what degree that should be
relaxed when cfs_rq->next is !NULL is harder to determine.
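
For reference, wakeup_preempt_entity() is roughly the following (again
taken from a contemporary fair.c, so verify against the actual tree). In
the pick_next_entity() calls above it is invoked as
wakeup_preempt_entity(buddy_or_second, left), so a return value of 1 --
the candidate being ahead of 'left' in vruntime by more than one wakeup
granularity -- is what makes the '< 1' test fail and the candidate be
rejected as too unfair:

/*
 * Should 'se' preempt 'curr'?  Returns -1 (no), 0 (don't care) or
 * 1 (yes) depending on how far curr's vruntime is ahead of se's.
 */
static int
wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
{
	s64 gran, vdiff = curr->vruntime - se->vruntime;

	/* curr is not ahead of se in vruntime: no reason to preempt */
	if (vdiff <= 0)
		return -1;

	/* curr is ahead of se by more than one wakeup granularity */
	gran = wakeup_gran(se);
	if (vdiff > gran)
		return 1;

	return 0;
}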

After that, it seems we have no ignored yields.
But it does not affect the performance of my test case.

Ok, this is the first patch. The second patch does not improve ignored
yields at all, so the above paragraph still applies. It would be nice
if you could instrument with trace_printk, when cfs_rq->next is valid,
whether it's the entity_before() check or wakeup_preempt_entity() that
is preventing the skip. Would that be possible?

I will try that.
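
One possible instrumentation for this (an untested sketch, assuming it
sits inside the skip-handling block of pick_next_entity(), just before
the final wakeup_preempt_entity() check, where 'second', 'curr' and
'left' are in scope):

		/* untested sketch: whenever a next buddy exists, dump the
		 * inputs of the skip decision.  second == curr here means
		 * the !second / entity_before() path was taken; preempt == 1
		 * means wakeup_preempt_entity() is what keeps the skip
		 * candidate running. */
		if (cfs_rq->next)
			trace_printk("skip: second=%d is_curr=%d preempt=%d\n",
				     second != NULL, second == curr,
				     second ? wakeup_preempt_entity(second, left) : 99);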

I still think the second patch is right, independent of whether it helps
your test case, because it makes no sense to me at all that the task after
the skip candidate is ignored if there is a next or last buddy.

I agree. This patch makes sense to me as a bug fix.
And I think the first patch also makes sense on its own.

I did some more experiments and removed the wakeup_preempt_entity checks in
pick_next_entity - assuming that this would result in the source always being
stopped and the target always being picked. But still, no performance difference.
As soon as I play with vruntime I do see a difference (but only without the cpu cgroup
controller). I will try to better understand the scheduler logic and do some more
testing. If you have anything that I should test, let me know.
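
Presumably -- this is only an illustration of the experiment described
above, not the actual diff that was tested -- removing those checks
amounts to honouring the buddies unconditionally:

	/* illustration only: pick the buddies unconditionally,
	 * without asking wakeup_preempt_entity() whether that is fair */
	if (cfs_rq->next)
		se = cfs_rq->next;
	else if (cfs_rq->last)
		se = cfs_rq->last;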


The fact that the vruntime tricks only make a difference when cgroups are
involved is interesting. Can you describe roughly how the cgroup
is configured?

It's the other way around. My vruntime patch ONLY helps WITHOUT the cpu cgroup controller.
In other words, this example on a 16-CPU host (resulting in 4x overcommitment)
time ( for ((d=0; d<16; d++)) ; do cgexec -g cpu:test$d qemu-system-s390x -enable-kvm -kernel /root/REPOS/kvm-unit-tests/s390x/diag9c.elf  -smp 4 -nographic -nodefaults -device sclpconsole,chardev=c2 -chardev file,path=/tmp/log$d.log,id=c2  & done; wait)
does NOT benefit from the vruntime patch, but when I remove the "cgexec -g cpu:test$d" it does:
time ( for ((d=0; d<16; d++)) ; do qemu-system-s390x -enable-kvm -kernel /root/REPOS/kvm-unit-tests/s390x/diag9c.elf  -smp 4 -nographic -nodefaults -device sclpconsole,chardev=c2 -chardev file,path=/tmp/log$d.log,id=c2  & done; wait)
Similarly, does your config have CONFIG_SCHED_AUTOGROUP
or CONFIG_FAIR_GROUP_SCHED set? I assume FAIR_GROUP_SCHED must be and

Yes, both are set.
I wonder if the impact of your patch is dropping groups of tasks in
priority as opposed to individual tasks. I'm not that familiar with how
groups are handled in terms of how they are prioritised, unfortunately.

I'm still hesitant to consider the vruntime hammer in case it causes
fairness problems when vruntime is no longer reflecting time spent on
the CPU.

I understand your concerns. What about subtracting the same amount of
vruntime from the target as we add to the yielder? Would that result in
quicker rebalancing while still keeping everything in order?
The reason I am asking is that we initially
realized that setting some tunables lower, e.g.
kernel.sched_latency_ns = 2000000
kernel.sched_migration_cost_ns = 100000
makes things faster in a similar fashion. And that also works with cgroups.
But ideally we would find a solution without changing tunables.
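
A minimal sketch of that symmetric-adjustment idea (a hypothetical helper
invented here for illustration, not the posted patch; it ignores
min_vruntime clamping and the cgroup hierarchy entirely):

/*
 * Hypothetical illustration of the idea above: move the yielder
 * forward and the target backward in virtual time by the same
 * delta, so the sum of their vruntimes is unchanged.
 */
static void yield_to_swap_vruntime(struct sched_entity *yielder,
				   struct sched_entity *target,
				   u64 delta)
{
	yielder->vruntime += delta;	/* yielder gives up part of its share */
	target->vruntime  -= delta;	/* target receives exactly that amount */
}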


