Hello,

On Wed, Jun 26, 2024 at 10:23:42AM +0200, Peter Zijlstra wrote:
...
> - cpuset
> - cpuset-v2
> - isolcpus boot crap
>
> And they're all subtly different iirc, but IIRC the cpuset ones are
> simplest since the task is part of a cgroup and the cgroup cpumask is
> imposed on them and things should be fairly straight forward.
>
> The isolcpus thing creates a pile of single CPU partitions and people
> have to manually set cpu-affinity, and here we have some hysterical
> behaviour that I would love to change but have not yet dared do --
> because I know there's people doing dodgy things because they've been
> sending 'bug' reports.
>
> Specifically it is possible to set a cpumask that spans multiple
> partitions :-( Traditionally the behaviour was that it would place the
> task on the lowest cpu number; the current behaviour is that the task is
> placed randomly on any CPU in the given mask.

This is what I was missing. I was just thinking of the cpuset case, and as
cpuset partitions are always reflected in the task cpumasks, there isn't a
whole lot to do.

...

> > While it would make sense to communicate partitions to the BPF
> > scheduler, would it make sense to reject a BPF scheduler based on it?
> > i.e. Assuming that the feature is implemented, what would distinguish
> > between a BPF scheduler which handles partitions specially and one
> > which doesn't care?
>
> Correctness? Anyway, can't you handle this in the kernel part, simply
> never allow a shared runqueue to cross a root_domain's mask and put some
> WARNs in to ensure constraints are respected etc.? Should be fairly
> simple to check that prev_cpu and new_cpu have the same root_domain, for
> instance.

Yeah, I'll plug it. It might as well just be rejecting and ejecting BPF
schedulers when such conditions are detected. The BPF scheduler doesn't
have to use the built-in DSQs and can decide to dispatch to any CPU from
its BPF queues (however those may be implemented; they can also be in
userspace), so it's a bit tricky to enforce correctness dynamically after
the fact. I'll think more on it.

Thanks.

--
tejun
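
For illustration, the kernel-side check Peter describes could look roughly
like the sketch below. It assumes the kernel/sched/sched.h environment
(cpu_rq() and the per-runqueue root_domain pointer rq->rd);
scx_same_root_domain() and the dispatch-path snippet are made-up names for
this sketch rather than existing sched_ext code, and locking around rq->rd
is glossed over.

	/*
	 * Sketch only: does moving a task from prev_cpu to new_cpu stay
	 * within one root_domain?  rq->rd is the root_domain the CPU
	 * currently belongs to, so a pointer comparison suffices.
	 */
	static bool scx_same_root_domain(int prev_cpu, int new_cpu)
	{
		return cpu_rq(prev_cpu)->rd == cpu_rq(new_cpu)->rd;
	}

	/* e.g. in the dispatch path, before committing the BPF scheduler's pick */
	if (WARN_ON_ONCE(!scx_same_root_domain(prev_cpu, new_cpu)))
		return -EINVAL;	/* or reject the dispatch / eject the scheduler */

Checking at dispatch time is what would catch a scheduler that pulls a task
across partitions via its own BPF or userspace queues, which is the case
that can't be validated up front at load time.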