Re: [PATCHSET v6] sched: Implement BPF extensible scheduler class

On Tue, May 14, 2024 at 01:07:15AM +0100, Qais Yousef wrote:
> On 05/13/24 14:26, Steven Rostedt wrote:

> > > That is, from where I am sitting I see $vendor mandate their $enterprise
> > > product needs their $BPF scheduler. At which point $vendor will have no
> > > incentive to ever contribute back.
> > 
> > Believe me they already have their own scheduler, and because its so
> > different, it's very hard to contribute back.

'They' are free to have their own scheduler, but since 'nobody' is using
it and 'they' want their product to work on RHEL / SLES / etc., they
are bound to respect the common interfaces, no?

> > > So I don't at all mind people playing around with schedulers -- they can
> > > do so today, there are a ton of out of tree patches to start or learn
> > > from, or like I said, it really isn't all that hard to just rip out fair
> > > and write something new.
> > 
> > For cloud servers, I bet a lot of schedulers are not public. Although,
> > my company tries to publish the schedulers they use.

Yeah, it's the TiVo thing. Keeping all that private creates the rebase
pain. Outside of that there's nothing we can do.

Anyway, instead of doing magic mushroom schedulers, what does the cloud
crud actually want? I know the KVM people were somewhat looking forward
to the EEVDF sched_attr::sched_runtime extension because virt likes the
longer slices. Less preemption is better for them.

In fact, some of the facebook workloads also wanted longer slices (and
no wakeup preemption).

> > From what I understand (I don't work on production, but Chromebooks), a
> > lot of changes cannot be contributed back because their updates are far
> > from what is upstream. Having a pluggable scheduler would actually allow
> > them to contribute *more*.

So can we please start by telling us what kind of magic hacks ChromeOS
has, and what for?

The term 'contributing' seems to mean different things to us. Building an
external scheduler isn't contributing, it's fragmenting.

> > > Keeping a rando github repo with BPF schedulers is not contributing.
> > 
> > Agreed, and I would guess having them in the Linux kernel tree would be
> > more beneficial.

Yeah, no. Same thing. It's just a pile of junk until someone puts the
time in to figure out how to properly integrate it. Very much like Qais
argues below.

> > > That's just a repo with multiple out of tree schedulers to be ignored.
> > > Who will put in the effort of upstreaming things if they can hack up a
> > > BPF and throw it over the wall?
> > 
> > If there's a place in the Linux kernel tree, I'm sure there would be
> > motivation to place it there. Having it in the kernel proper does give
> > more visibility of code, and therefore enhancements to that code. This
> > was the same rationale for putting perf into the kernel proper.

These things are very much not the same. A pile of random hacks vs a
single unified interface to PMUs. They're like the polar opposite of
one another.

> > > So yeah, I'm very much NOT supportive of this effort. From where I'm
> > > sitting there is simply not a single benefit. You're not making my life
> > > better, so why would I care?
> > > 
> > > How does this BPF muck translate into better quality patches for me?
> > 
> > Here's how we will be using it (we will likely be porting sched_ext to
> > ChromeOS regardless of its acceptance).
> > 
> > Doing testing of scheduler changes in the field is extremely time
> > consuming and complex. We tested EEVDF vs CFS by backporting EEVDF to
> > 5.15 (as that is the kernel version we are using on the chromebooks we

/me mumbles something about necro-kernels...

> > were testing on), and then we need to add a user space "switch" to
> > change the scheduler. Note, this also risks causing a bug in adding
> > these changes. Then we push the kernel out, and then start our
> > experiment that enables our feature to a small percentage, and slowly
> > increases the number of users until we have enough for a statistical
> > result.
> > 
> > What sched_ext would give us is an easy way to try different scheduling
> > algorithms and get feedback much quicker. Once we determine a solution
> > that improves things, we would then spend the time to implement it in
> > the scheduler, and yes, send it upstream.

This sounds a little backwards... ok, a lot. How do you do actual
problem analysis in this case? Having random statistics is not really
useful beyond determining there might be a problem.

The next step is isolating that problem locally and reproducing it. Then
analysing *what* the actual problem is and how it happens, and then try
and think of a solution.

(preferably one that then doesn't break another thing :-)

> > To me, sched_ext should never be the final solution, but it can be
> > extremely useful in testing various changes quickly in the field. Which
> > to me would encourage more contributions.

Well, the thing is, the moment sched_ext itself lands upstream, it will
become the final solution for a fair number of people and leave us, the
wider Linux scheduler community, up a creek without a paddle.

There is absolutely no inherent incentive to further contribute. Your
immediate problem is solved, you get assigned the next problem. That is
reality.

Worse, they can share the BPF hack and get a warm fuzzy feeling of
'contribution' while in fact it's useless. At best we know 'random hack
changed something for them'. No problem description, no reproducer, no
nothing.

Anyway, if you feel you need BPF hackery to do this, by all means, do
so. But realize that it is a debug tool and in general we don't merge
debug tools.

Also, I would argue that perhaps a scheduler livepatch would be more
convenient to actually debug / A-B test things.

> I really don't think the problems we have are because of EEVDF vs CFS vs
> anything else. Other major OSes have one scheduler, but what they excel at is
> providing better QoS interfaces and mechanisms to handle specific scenarios that
> Linux lacks.

Quite possibly. The immediate problem being that adding interfaces is
terrifying. Linus has a rather strong opinion about breaking stuff, and
getting this wrong will very quickly result in a paint-into-corner type
problem.

We can/could add fields to sched_attr under the understanding that
they're purely optional and try things, *however* too many such fields
and we're up a creek again.

> The confusion I see again and again over the years is the fragmentation of
> the Linux ecosystem; app writers don't know how to do things properly on Linux
> vs other OSes. Note our CONFIG system is part of this fragmentation.
> 
> The addition of more flavours which inevitably will lead to custom QoS specific
> to that scheduler and libraries built on top of it that require that particular
> extension available is a recipe for more confusion and fragmentation.

Yes, this!

> I really don't buy the rapid development aspect either. The scheduler was
> heavily influenced by the early contributors, who came from the server market
> and had (few) very specific workloads they needed to optimize for, where
> throughput carried a heavier weight vs latency. Fast forward to now, things are
> different. Even on the server market latency/responsiveness has become more
> important. Power and thermal are important on a larger class of systems now
> too, I'd dare say even on the server market.

Absolutely, AFAIU racks are both power and thermal limited. There are
some crazy ACPI protocols to manage some of this.

> How do you know when it's okay for an app/task to consume too
> much power and when it is not? Hint hint, you can't unless someone in userspace
> tells you.

Yes, cluster/cloud infrastructure needs to manage that. There is nothing
smart the kernel can do here on its own, except respect the ACPI lunacy
and hard throttle itself when the panic signal comes.

> Similarly for latency vs throughput. What is the correct way to
> write an application to provide this info? Then we can ask what is missing in
> the scheduler to enable this.

Right, so the EEVDF thing is a start here. By providing a per task
request size, applications can indicate if they want frequent and short
activations or more infrequent longer activations.

An application can know its (average) activation time; the kernel has
no clue when work starts and when it completes. Applications can fairly
trivially measure this using CLOCK_THREAD_CPUTIME_ID reads before and
after, and communicate it (very much like SCHED_DEADLINE).

Anyway, yes, userspace needs to change and provide more information. The
trick of course is figuring out which bits of information are critical /
useful etc.

There is a definite limit on the amount of constraints you want to solve
at runtime.

Everybody going off and hacking their own thing does not help, we need
collaboration to figure out what it is that is needed.

> Note the original min/wakeup_granularity_ns, latency_ns etc were tuned by
> default for throughput by the way (server market bias). You can manipulate
> those and get better latencies.

The immediate problem with those knobs is that they are system wide. But
yes, everybody was randomly poking at those knobs, sometimes in obviously
insane ways.

> FWIW IMO the biggest issue I see in the scheduler is that its testability and
> debuggability are hard. I think BPF can be a good fit for that. For the latter
> I started this project, yet I am still trying to figure out how to add tracers
> for the difficult paths to help people more easily report when a bad decision
> has happened, and to provide more info about the internal state of the
> scheduler, in the hope of accelerating the process of finding solutions.

So the pitfall here is that exposing that information for debug
purposes can/will lead to people consuming this information for
non-debug purposes, and then when we want to change things we're stuck
because suddenly someone relies on something we believed was an
implementation detail :/

I've been bitten by this before and this is why I'm so very hesitant to
put tracepoints in the scheduler.

> I think it would be great to have a clear list of the current limitations
> people see in the scheduler. It could be a failure on my end, but I haven't
> seen specifics of problems, and what was tried and failed, to the point it is
> impossible to move forward.

Right, list, but also ideally reproducers (yeah, I know, really hard).

The moment we merge sched_ext all motivation to do any of this work goes
out the window.

> From what I see, I am hitting bugs here and there
> all the time. But they are hard to debug, to truly understand where things went
> wrong. Like this one for example where PTHREAD_PRIO_INHERIT is a NOP for fair
> tasks. Many thought using this flag doesn't help (rather than that it's buggy)..

Yay for the terminal backlog :/ I'll try and have a look.



