Re: [PATCH 01/18] drm/i915/execlists: Set queue priority from secondary port

On 10/04/2018 15:56, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2018-04-10 15:42:07)

On 10/04/2018 15:24, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2018-04-10 15:05:33)

On 09/04/2018 12:13, Chris Wilson wrote:
We can refine our current execlists->queue_priority if we inspect
ELSP[1] rather than the head of the unsubmitted queue. Currently, we use
the unsubmitted queue and say that if a subsequent request is more than
important than the current queue, we will rerun the submission tasklet

s/more than important/more important/

to evaluate the need for preemption. However, we only want to preempt if
we need to jump ahead of a currently executing request in ELSP. The
second reason for running the submission tasklet is to amalgamate requests
into the active context on ELSP[0] to avoid a stall when ELSP[0] drains.
(Though repeatedly amalgamating requests into the active context and
triggering many lite-restores is of questionable gain; the goal really is to
put a context into ELSP[1] to cover the interrupt.) So if instead of
looking at the head of the queue, we look at the context in ELSP[1] we
can answer both of the questions more accurately -- we don't need to
rerun the submission tasklet unless our new request is important enough
to feed into, at least, ELSP[1].

References: f6322eddaff7 ("drm/i915/preemption: Allow preemption between submission ports")
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Michał Winiarski <michal.winiarski@xxxxxxxxx>
Cc: Michel Thierry <michel.thierry@xxxxxxxxx>
Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
    drivers/gpu/drm/i915/intel_lrc.c | 3 ++-
    1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3592288e4696..b47d53d284ca 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -713,7 +713,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
                        kmem_cache_free(engine->i915->priorities, p);
        }
    done:
-     execlists->queue_priority = rb ? to_priolist(rb)->priority : INT_MIN;
+     execlists->queue_priority =
+             port != execlists->port ? rq_prio(last) : INT_MIN;
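[Editor's note: for readers without the surrounding source, execlists->queue_priority is consumed when a new request is queued. A minimal sketch of that consumer, reconstructed from kernels of this era (the helper name submit_queue is upstream's, but the body is simplified, not verbatim):]

    /* Simplified sketch: queue_priority gates this check.  A newly
     * queued request only kicks the submission tasklet, and hence is
     * only considered for ELSP or for preemption, if it outranks
     * whatever priority is advertised.  With this patch, INT_MIN is
     * advertised while an ELSP slot is free, so any new request kicks
     * the tasklet; once both ports are busy, only a request
     * outranking rq_prio(last) does.
     */
    static void submit_queue(struct intel_engine_cs *engine, int prio)
    {
            if (prio > engine->execlists.queue_priority)
                    tasklet_hi_schedule(&engine->execlists.tasklet);
    }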

Please bear with my questions since I am not 100% up to date with
preemption.

Should this be rq_prio("port[1]") for future proofing? Or do you really
mean the last port? In that case it wouldn't be the highest pending
priority that the kerneldoc for execlists->queue_priority describes.

I mean the "secondary" port, so yes, using the last executing port, under
the assumption that we grow into a ring of many useless submission ports.
The kerneldoc is no more or no less accurate. :)

"That we _don't_ grow"?

Hmm, no, when we get "last_port" it slots right into here. I just don't
have the future-facing code yet to prevent Mika from having to think a
little. The intent is that when there is an ELSP slot available,
queue_priority is INT_MIN, and when there are none, rq_prio(last).

My bad for remembering what I want the code to be without remembering
what the code says.
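[Editor's note: a sketch of the future-facing shape alluded to above, assuming a hypothetical "last_port" cursor for a ring of N submission ports; illustrative only, not code from this series:]

    /* Hypothetical N-port generalisation: advertise INT_MIN while a
     * port is still free, otherwise the priority of the last (i.e.
     * lowest-priority) executing request.  With today's two ports
     * this is the same test as "port != execlists->port".
     */
    execlists->queue_priority =
            port == last_port ? rq_prio(last) : INT_MIN;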
Although I failed to understand what we do in both cases if a new
request arrives with higher prio than the one in ELSP[1]. Looks like
nothing? Wait until the GPU moves it to ELSP[0] and preempt then? Is
this because we can't do it safely, or did I misread something?

This is covered by igt/gem_exec_schedule/preempt-queue*. If we receive a
request with priority higher than ELSP[1], we start a preemption, as

       if (need_preempt(engine, last, execlists->queue_priority)) {

will evaluate to true. queue_priority is either the lowest executing
priority (new code) or the lowest pending priority (old code). In either
case, if the new request is more important than the queue_priority, we
preempt.
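[Editor's note: for reference, a simplified reconstruction of the helper being quoted, as introduced by f6322eddaff7; from memory, not guaranteed verbatim:]

    /* Preempt only if the engine has a preempt context to switch to
     * and the advertised priority strictly outranks the request
     * currently executing on ELSP[0].  The max(..., 0) means work at
     * or below default (0) priority never triggers preemption.
     */
    static inline bool need_preempt(const struct intel_engine_cs *engine,
                                    const struct i915_request *last,
                                    int prio)
    {
            return engine->i915->preempt_context &&
                   prio > max(rq_prio(last), 0);
    }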

How, when "last" is the request from ELSP[0]? And also
execlists->queue_priority has not yet been updated to reflect the new
priority?

When we start executing last on ELSP[0] there will have been another
execlists_dequeue() where we see an empty slot (or fill it) and update
queue_priority. If we are down to the last request, it will be set to
INT_MIN. Upon its completion, it will remain INT_MIN.
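[Editor's note: the lifecycle described above, restated as a timeline; illustrative:]

    /* queue_priority over a request's life under the new scheme:
     *
     *   dequeue fills both ports        -> rq_prio(last)
     *   ELSP[0] drains, dequeue refills -> recomputed at done:
     *   last request in flight,
     *   a port now free                 -> INT_MIN
     *   final completion, queue empty   -> remains INT_MIN
     */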

I don't see it yet, let me walk through it:

0. Initial situation, GPU busy with two requests, no outstanding ones:

ELSP[0] = prio 2
ELSP[1] = prio 0

1. queue_priority = 0
2. New execbuf comes along with prio=1.
3. We choose to schedule the tasklet - good.
4. execlists_dequeue runs

last = prio 2

if (need_preempt(engine, last, execlists->queue_priority))

queue_priority = 0, so will not preempt last which is prio 2 - so no preemption - good.

queue_priority remains at zero since we "goto unlock" with both ports busy and no preemption is triggered.

5. ELSP[0] completes, new ELSP[0] with prio 0.

(Earlier we missed the opportunity to replace ELSP[1] with the waiting prio 1 request before ELSP[0] completed - perhaps we have no choice? But OK, carrying on...)

6. execlists_dequeue runs

last = prio 0

if (need_preempt(engine, last, execlists->queue_priority))

queue_priority = 0, so again preemption not triggered.

Perhaps I made a mistake somewhere...
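[Editor's note: the scenario above, condensed as a trace; a restatement of Tvrtko's steps only, not a verdict on them:]

    /* 0. ELSP[0]=prio 2, ELSP[1]=prio 0; queue_priority = 0
     * 2. execbuf arrives with prio 1 -> tasklet scheduled
     * 4. dequeue: need_preempt(last=prio 2, queue_priority=0) -> no
     *    preemption; both ports busy, queue_priority stays 0
     * 5. ELSP[0] completes; new ELSP[0] has prio 0
     * 6. dequeue: need_preempt(last=prio 0, queue_priority=0) -> no
     *    preemption
     */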

Regards,

Tvrtko

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx