On Wed, Oct 7, 2020 at 3:36 AM Qais Yousef <qais.yousef@xxxxxxx> wrote:
>
> On 10/06/20 13:04, Rob Clark wrote:
> > On Tue, Oct 6, 2020 at 3:59 AM Qais Yousef <qais.yousef@xxxxxxx> wrote:
> > >
> > > On 10/05/20 16:24, Rob Clark wrote:
> > >
> > > [...]
> > >
> > > > > RT planning and partitioning is not easy task for sure. You might want to
> > > > > consider using affinities too to get stronger guarantees for some tasks and
> > > > > prevent cross-talking.
> > > >
> > > > There is some cgroup stuff that is pinning SF and some other stuff to
> > > > the small cores, fwiw.. I think the reasoning is that they shouldn't
> > > > be doing anything heavy enough to need the big cores.
> > >
> > > Ah, so you're on big.LITTLE type of system. I have done some work which enables
> > > biasing RT tasks towards big cores and control the default boost value if you
> > > have util_clamp and schedutil enabled. You can use util_clamp in general to
> > > help with DVFS related response time delays.
> > >
> > > I haven't done any work to try our best to pick a small core first but fallback
> > > to big if there's no other alternative.
> > >
> > > It'd be interesting to know how often you end up on a big core if you remove
> > > the affinity. The RT scheduler picks the first cpu in the lowest priority mask.
> > > So it should have this bias towards picking smaller cores first if they're
> > > in the lower priority mask (ie: not running higher priority RT tasks).
> >
> > fwiw, the issue I'm looking at is actually at the opposite end of the
> > spectrum, less demanding apps that let cpus throttle down to low
> > OPPs.. which stretches out the time taken at each step in the path
> > towards screen (which seems to improve the odds that we hit priority
> > inversion scenarios with SCHED_FIFO things stomping on important CFS
> > things)
>
> So you do have the problem of RT task preempting an important CFS task.
>
> > There is a *big* difference in # of cpu cycles per frame between
> > highest and lowest OPP..
>
> To combat DVFS related delays, you can use util clamp.
>
> Hopefully this article helps explain it if you didn't come across it before
>
> https://lwn.net/Articles/762043/
>
> You can use sched_setattr() to set SCHED_FLAG_UTIL_CLAMP_MIN for a task. This
> will guarantee everytime this task is running it'll appear it has at least
> this utilization value, so schedutil governor (which must be used for this to
> work) will pick up the right performance point (OPP).
>
> The scheduler will try its best to make sure that the task will run on a core
> that meets the minimum requested performance point (hinted by setting
> uclamp_min).

Yeah, I think we will end up making some use of uclamp.. there is
someone else working on that angle.

But without it, this is a case that exposes legit prioritization
problems with commit_work which we should fix ;-)

BR,
-R

>
> Thanks
>
> --
> Qais Yousef