Re: [POC][RFC][PATCH] sched: Extended Scheduler Time Slice

On Wed, 25 Oct 2023 15:55:45 +0200
Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:

> On Wed, Oct 25, 2023 at 08:54:34AM -0400, Steven Rostedt wrote:
> 
> > I didn't want to overload that for something completely different. This is
> > not a "restartable sequence".  
> 
> Your hack is arguably worse. At least rseq already exists and most
> threads will already have it set up if you have a recent enough glibc.

I don't expect that file to be the final solution. I can look at the rseq
code, but I really hate to overload that. I'm thinking perhaps another
system call, or what the hell, add another ioctl-like feature to prctl()!
Actually, prctl() may be the proper place for this.
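
Something like the below, perhaps (PR_SET_EXTEND_SLICE and its command
number are made up here, just to show the shape a prctl() interface could
take for registering the per-thread word the kernel reads on its way back
to user space):

/*
 * Hypothetical sketch only: PR_SET_EXTEND_SLICE does not exist.  It just
 * illustrates registering the address of a per-thread "extend" word.
 */
#include <stdint.h>
#include <sys/prctl.h>

#define PR_SET_EXTEND_SLICE	1000	/* made-up prctl command */

static __thread volatile uint32_t extend_word;

static int register_extend_word(void)
{
	/* Tell the kernel where this thread's extend word lives. */
	return prctl(PR_SET_EXTEND_SLICE,
		     (unsigned long)&extend_word, 0, 0, 0);
}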

> 
> > > So what if it doesn't ? Can we kill it for not playing nice ?  
> > 
> > No, it's no different than a system call running for a long time. You could  
> 
> Then why ask for it? What's the point. Also, did you define
> sched_yield() semantics for OTHER to something useful? Because if you
> didn't you just invoked UB :-) We could be setting your pets on fire.

Actually, it works with *any* system call, not just sched_yield(). I just
used that one as it was the best way to annotate "the kernel asked me to
schedule, I'm going to schedule". If you noticed, I did not modify
sched_yield() in the patch. NEED_RESCHED_LAZY is still set, and without
the extend bit set, the task will schedule on its return back to user space.
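
For illustration, the user space side of the POC looks roughly like this
(the bit layout and helper names here are only illustrative; the point is
that clearing the extend bit and then entering the kernel via any system
call is what lets the deferred reschedule happen):

#include <sched.h>
#include <stdint.h>

#define EXTEND_BIT	(1u << 0)	/* user: "let me finish this critical section" */
#define RESCHED_BIT	(1u << 1)	/* kernel: "I deferred a reschedule for you" */

/* Set up elsewhere, e.g. by mmap()ing the POC's file. */
static volatile uint32_t *extend_map;

static inline void extend(void)
{
	if (!extend_map)
		return;
	*extend_map = EXTEND_BIT;
}

static inline void unextend(void)
{
	uint32_t prev;

	if (!extend_map)
		return;
	/* Atomically clear the extend bit and pick up anything the kernel set. */
	prev = __atomic_exchange_n(extend_map, 0, __ATOMIC_SEQ_CST);
	/* The kernel let us run past the tick; give the CPU back now. */
	if (prev & RESCHED_BIT)
		sched_yield();
}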

> 
> > set this bit and leave it there for as long as you want, and it should not
> > affect anything.  
> 
> It would affect the worst case interference terms of the system at the
> very least.

If you are worried about that, it can easily be made configurable so it can
be turned off. Seriously, I highly doubt that this would even be measurable
as interference. I could be wrong, I haven't tested that. It's something we
can look at, but until it's shown to be a problem it should not be a show
stopper.

> 
> > If you look at what Thomas's PREEMPT_AUTO.patch  
> 
> I know what it does, it also means your thing doesn't work the moment
> you set things up to have the old full-preempt semantics back. It
> doesn't work in the presence of RT/DL tasks, etc..

Note, I am looking at ways to make this work with full preempt semantics.
This is still a POC, and there's a lot of room for improvement here. From my
understanding, the potential of Thomas's patch is to get rid of the
build-time configurable semantics of NONE, VOLUNTARY and PREEMPT (only
PREEMPT_RT will be different).

> 
> More importantly, it doesn't work for RT/DL tasks, so having the bit set
> and not having OTHER policy is an error.

It would basically be a nop.

> 
> Do you want an interface that randomly doesn't work ?

An RT task doesn't get preempted by ticks, so how would it randomly not
work? We could allow RR tasks to get a bit more time if they have this bit
set too. Or maybe allow DL to get a little more if there's not another DL
task needing to run.

But for now, this is only for SCHED_OTHER, as this is not usually a problem
for RT/DL tasks. The extend bit is only a hint to the kernel; there are no
guarantees that it will be available or that the kernel will honor it. But
because there's a lot of code out there that implements user space spin
locks, this could be a huge win for that code once implemented, without
changing much.

Remember, RT and DL are about deterministic behavior, SCHED_OTHER is about
performance. This is a performance patch, not a deterministic one.

> 
> > We could possibly make it adjustable.   
> 
> Tunables are not a good thing.
> 
> > The reason I've been told over the last few decades of why people implement
> > 100% user space spin locks is because the overhead of going into the kernel
> > is way too high.  
> 
> Over the last few decades that has been a blatant falsehood. At some
> point (right before the whole meltdown trainwreck) amluto had syscall
> overhead down to less than 150 cycles.

Well, as far as I know, the testing that PostgreSQL has done has never seen
that.

> 
> Then of course meltdown happened and it all went to shit.

True dat.

> 
> But even today (on good hardware or with mitigations=off):
> 
> gettid-1m:	179,650,423      cycles
> xadd-1m:	 23,036,564      cycles
> 
> syscall is the cost of roughly 8 atomic ops. More expensive, sure. But
> not insanely so. I've seen atomic ops go up to >1000 cycles if you
> contend them hard enough.
> 

This has been your argument for over a decade, and the real world has seen
it differently. Performance matters significantly for user applications, and
if system calls didn't have performance issues, I'm sure the
performance-centric applications would have used them.

This is because these critical sections run for far less than the cost of 8
atomic ops. And when you are executing these critical sections millions of
times a second, that adds up quickly.
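
For reference, the kind of comparison quoted above can be reproduced with
something along these lines (x86-64 only, rdtsc based, no pinning or
serialization, so treat the output as rough):

#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <x86intrin.h>

#define LOOPS	1000000

int main(void)
{
	long counter = 0;
	uint64_t start, gettid_cycles, xadd_cycles;

	/* 1M gettid() system calls. */
	start = __rdtsc();
	for (int i = 0; i < LOOPS; i++)
		syscall(SYS_gettid);
	gettid_cycles = __rdtsc() - start;

	/* 1M uncontended locked xadd operations. */
	start = __rdtsc();
	for (int i = 0; i < LOOPS; i++)
		__atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);
	xadd_cycles = __rdtsc() - start;

	printf("gettid-1m: %llu cycles\n", (unsigned long long)gettid_cycles);
	printf("xadd-1m:   %llu cycles\n", (unsigned long long)xadd_cycles);
	return 0;
}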

-- Steve




