Re: [PATCH 01/18] drm/i915/execlists: Set queue priority from secondary port

Quoting Tvrtko Ursulin (2018-04-10 15:05:33)
> 
> On 09/04/2018 12:13, Chris Wilson wrote:
> > We can refine our current execlists->queue_priority if we inspect
> > ELSP[1] rather than the head of the unsubmitted queue. Currently, we use
> > the unsubmitted queue and say that if a subsequent request is more than
> > important than the current queue, we will rerun the submission tasklet
> 
> s/more than important/more important/
> 
> > to evaluate the need for preemption. However, we only want to preempt if
> > we need to jump ahead of a currently executing request in ELSP. The
> > second reason for running the submission tasklet is to amalgamate requests
> > into the active context on ELSP[0] to avoid a stall when ELSP[0] drains.
> > (Though repeatedly amalgamating requests into the active context and
> > triggering many lite-restores is of questionable gain, the goal really is to
> > put a context into ELSP[1] to cover the interrupt.) So if instead of
> > looking at the head of the queue, we look at the context in ELSP[1] we
> > can answer both of the questions more accurately -- we don't need to
> > rerun the submission tasklet unless our new request is important enough
> > to feed into, at least, ELSP[1].
> > 
> > References: f6322eddaff7 ("drm/i915/preemption: Allow preemption between submission ports")
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Michał Winiarski <michal.winiarski@xxxxxxxxx>
> > Cc: Michel Thierry <michel.thierry@xxxxxxxxx>
> > Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > ---
> >   drivers/gpu/drm/i915/intel_lrc.c | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> > index 3592288e4696..b47d53d284ca 100644
> > --- a/drivers/gpu/drm/i915/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/intel_lrc.c
> > @@ -713,7 +713,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >                       kmem_cache_free(engine->i915->priorities, p);
> >       }
> >   done:
> > -     execlists->queue_priority = rb ? to_priolist(rb)->priority : INT_MIN;
> > +     execlists->queue_priority =
> > +             port != execlists->port ? rq_prio(last) : INT_MIN;
> 
> Please bear with my questions since I am not 100% up to date with 
> preemption.
> 
> Should this be rq_prio("port[1]") for future proofing? Or you really 
> mean last port? In which case it wouldn't be the highest pending 
> priority as kerneldoc for execlists->queue_priority says.

I mean "secondary" port, so yes using last executing port under the
assumption that we grow into a ring of many useless submission ports.
The kerneldoc is no more or no less accurate. :)

> So this patch changes the meaning of "pending". From pending == "not 
> submitted to ELSP" to pending == "not submitted to ELSP[0]". Which seems 
> to make sense, although it is not the easiest job to figure out the 
> consequences.

Yes.
 
> It even feels like a bugfix since it prevents tasklet scheduling when 
> all ports are filled with higher priority requests than the new one.

It won't fix any bugs, since we just reduce the number of kicks. Kicking
the tasklet only to find we have nothing to do is just wasted work. So yes,
I do agree that it is a bug fix of sorts, just not enough to merit a Fixes
tag.
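
To put the "fewer kicks" point in code, the submission side only schedules
the tasklet when the new request could displace something we are already
tracking, roughly along these lines (a sketch from memory; treat the helper
names as illustrative rather than the exact intel_lrc.c code):

	static void __submit_queue(struct intel_engine_cs *engine, int prio)
	{
		/* Remember the new high-water mark and let the tasklet
		 * re-evaluate the ELSP ports.
		 */
		engine->execlists.queue_priority = prio;
		tasklet_hi_schedule(&engine->execlists.tasklet);
	}

	static void submit_queue(struct intel_engine_cs *engine, int prio)
	{
		/* No point in kicking the tasklet unless the new request
		 * could displace something at or beyond ELSP[1].
		 */
		if (prio > engine->execlists.queue_priority)
			__submit_queue(engine, prio);
	}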
 
> Although I failed to understand what do we do in both cases if a new 
> request arrives of higher prio than the one in ELSP[1]. Looks like 
> nothing? Wait until GPU moves it to ELSP[0] and preempt then? Is this so 
> because we can't safely or I misread something?

This is covered by igt/gem_exec_schedule/preempt-queue*. If we receive a
request higher than ELSP[1], we start a preemption as

	if (need_preempt(engine, last, execlists->queue_priority)) {

will evaluate to true. queue_priority is either the lowest executing
priority (new code), or the lowest pending priority (old code). In either
case, if the new request is more important than queue_priority, we preempt.
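
For reference, need_preempt() here is roughly the following (paraphrased
from memory, so take the exact form with a pinch of salt):

	static inline bool need_preempt(const struct intel_engine_cs *engine,
					const struct i915_request *last,
					int prio)
	{
		/* Preempt only if the engine supports it and the incoming
		 * priority beats the request executing on ELSP[0].
		 */
		return engine->i915->preempt_context &&
		       prio > max(rq_prio(last), 0);
	}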

We won't evaluate preemption if we are still awaiting the HWACK from the
last ELSP write, or if we are still preempting. In both of those cases,
we expect to receive an interrupt promptly, upon which we then redo our
evaluations.

> Also, don't you need to manage execlists->queue_priority after CSB 
> processing now? So that it correctly reflects the priority of request in 
> ELSP[1] after ELSP[0] gets completed? It seems that without it would get 
> stuck at the previous value and then submission could decide to skip 
> scheduling the tasklet if new priority is lower than what was in ELSP[1] 
> before, and so would fail to fill ELSP[1].

Yes, that is also done here. Since we are always looking one request
ahead, we either update the priority based on the queue following the
resubmission on interrupt, or it is left as INT_MIN on completion.
Indeed, making sure we reset back to INT_MIN is essential so that we
don't miss any future submissions from idle.

We can add GEM_BUG_ON(queue_priority != INT_MIN) in engines_park to
check this condition.
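
Something along these lines (illustrative only, hung off whichever
park/idle hook we already have):

	/* Every request has retired by the time we park, so the tracked
	 * priority must have fallen back to its idle value.
	 */
	for_each_engine(engine, i915, id)
		GEM_BUG_ON(engine->execlists.queue_priority != INT_MIN);
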
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



