Hello,

On Thu, Nov 24, 2022 at 02:32:25PM +0000, Tvrtko Ursulin wrote:
> > Soft limits is a bit of a misnomer and can be confused with best-effort
> > limits such as memory.high. Probably best not to use the term.
>
> Are you suggesting "best effort limits" or "best effort <something>"? It
> would sound good to me if we found the right <something>. Best effort
> budget perhaps?

A more conventional name would be hierarchical weighted distribution.

> Also, when you mention scalability, are you concerned about the multiple
> tree walks I have per iteration? I wasn't so much worried about that,
> definitely not for the RFC, but even in general, due to the relatively low
> frequency of scanning and a good amount of the less trivial cost being
> outside the actual tree walks (drm client walks, GPU utilisation
> calculations, maybe more). But perhaps I don't have the right idea of how
> big cgroup hierarchies can be compared to the number of drm clients etc.

It's just a better way of doing this kind of weight-based scheduling. It's
simpler, more scalable and easier to understand how things are working.

The basic idea is pretty simple - each schedulable entity gets assigned a
timestamp, and whenever it consumes the target resource, its time is wound
forward by the consumption amount divided by its absolute share - e.g. if
cgroup A deserves 25% of the entire thing and it ran for 1s, its time is
wound forward by 1s / 0.25 == 4s. There's an rbtree keyed by these
timestamps; anything wanting to consume gets put on that tree, and whatever
is at the head of the tree is the next thing to run.
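A toy C sketch of the above, purely for illustration - the names (struct
entity, charge(), pick_next()) are made up here, and a linear scan stands
in for the rbtree a real implementation would use:

/*
 * Toy model of weighted virtual-time scheduling. Each entity carries a
 * vtime; consuming the resource winds vtime forward by runtime / share,
 * and the entity with the lowest vtime runs next.
 */
#include <stdio.h>

struct entity {
	const char *name;
	double share;	/* absolute share of the resource, e.g. 0.25 */
	double vtime;	/* virtual time; lowest runs next */
};

/* Wind the entity's virtual time forward by runtime / share. */
static void charge(struct entity *e, double runtime)
{
	e->vtime += runtime / e->share;
}

/* Pick the entity with the lowest vtime (the rbtree head for real). */
static struct entity *pick_next(struct entity *es, int n)
{
	struct entity *best = &es[0];

	for (int i = 1; i < n; i++)
		if (es[i].vtime < best->vtime)
			best = &es[i];
	return best;
}

int main(void)
{
	struct entity es[] = {
		{ "A", 0.25, 0.0 },
		{ "B", 0.75, 0.0 },
	};

	/* Simulate eight scheduling decisions of 1s each. */
	for (int i = 0; i < 8; i++) {
		struct entity *e = pick_next(es, 2);

		printf("run %s (vtime %.2f)\n", e->name, e->vtime);
		charge(e, 1.0);		/* ran for 1s */
	}
	return 0;
}

Run it and A gets 2 of every 8 one-second slots and B the other 6 - the
25%/75% split falls out of nothing but the vtime arithmetic.

Thanks.

-- 
tejun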