Re: [PATCH v6 00/34] GPU scheduler for i915 driver

On 26/04/2016 14:20, Daniel Vetter wrote:
> On Mon, Apr 25, 2016 at 10:54:27AM +0100, Chris Wilson wrote:
>>> As each batch buffer completes, it raises an interrupt which wakes up
>>> the scheduler. Note that it is possible for multiple buffers to
>>> complete before the IRQ handler gets to run. Further, the seqno values
>>> of the individual buffers are not necessarily incrementing as the
>>> scheduler may have re-ordered their submission. However, the scheduler
>>> keeps the list of executing buffers in order of hardware submission.
>>> Thus it can scan through the list until a matching seqno is found and
>>> then mark all in-flight nodes from that point on as completed.
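(A minimal standalone sketch of that scan, with hypothetical types rather
than the actual i915 structures; it assumes the in-flight list is kept
newest-first, so everything from the matching node onwards was submitted,
and hence completed, earlier:)

#include <stdbool.h>
#include <stdint.h>

struct in_flight_node {
	uint32_t seqno;			/* hardware seqno of this batch */
	bool completed;
	struct in_flight_node *next;	/* newest-first submission order */
};

static void scheduler_seqno_complete(struct in_flight_node *list,
				     uint32_t hw_seqno)
{
	struct in_flight_node *node = list;

	/* Scan until the batch that raised (or preceded) this interrupt. */
	while (node && node->seqno != hw_seqno)
		node = node->next;

	/* That node and every older one behind it have now completed. */
	for (; node; node = node->next)
		node->completed = true;
}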
>> No. You haven't built your timelines correctly. The idea behind the
>> timeline is that a request can wait upon its seqno and check it against
>> its own timeline, which is ordered only with other requests on its
>> timeline. Because of this, it is independent of whatever order the
>> scheduler executes the timelines in; each timeline is ordered.
>>
>> A request can simply wait for its timeline to advance, completely
>> ignorant of the scheduler. (Request signaling may be driven by the
>> scheduler, but that is a low-level, not GEM or dma-buf/fence,
>> implementation detail. And only if the scheduler is coupled into the
>> user-interrupt, but on execlists it will be using the context-switch
>> interrupt to drive itself, and for ringbuffer mode we have a choice of
>> user-interrupt or using pipe-control/dw-notify to keep the paths
>> separate.)
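(As a concrete illustration of that model, with illustrative types rather
than the i915 implementation: each timeline hands out seqnos in order at
request creation, and a waiter compares only against its own timeline, so
scheduler reordering across timelines is invisible to it:)

#include <stdbool.h>
#include <stdint.h>

struct timeline {
	uint32_t next_seqno;	/* handed out in order at request creation */
	uint32_t completed;	/* last seqno signalled on this timeline */
};

struct request {
	struct timeline *tl;
	uint32_t seqno;		/* ordered only relative to tl */
};

static void request_init(struct request *rq, struct timeline *tl)
{
	rq->tl = tl;
	rq->seqno = ++tl->next_seqno;
}

/* Completion is a question for the request's own timeline alone. */
static bool request_completed(const struct request *rq)
{
	/* Wrap-safe comparison: true once completed has advanced past us. */
	return (int32_t)(rq->tl->completed - rq->seqno) >= 0;
}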
> This is rather crucial, since the expectation is that other drivers can
> rely on fence->seqno being ordered correctly within one timeline. And e.g.
> amdgpu does what Chris describes and collapses fences on one timeline to
> just one.
>
> We do have to fix this before we can enable the scheduler.
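(For reference, the kind of collapsing being discussed, as a simplified
standalone sketch; the struct fence API provides fence_later() for the
same-context comparison, and the types here are illustrative:)

#include <stdint.h>
#include <stddef.h>

struct sw_fence {
	uint64_t context;	/* identifies the timeline */
	uint32_t seqno;		/* ordered only within that context */
};

/*
 * Two fences on the same timeline can be collapsed to the later one,
 * because waiting for the later seqno implies the earlier has signalled.
 * Fences from different timelines carry no such guarantee.
 */
static struct sw_fence *sw_fence_collapse(struct sw_fence *a,
					  struct sw_fence *b)
{
	if (a->context != b->context)
		return NULL;	/* different timelines: keep both */

	return (int32_t)(b->seqno - a->seqno) >= 0 ? b : a;
}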
As I have stated several times, this is not broken. The fence series might not introduce per-context seqnos for the entire driver, but it does introduce a per-context timeline specifically for the fence. Thus it is perfectly safe to collapse fences that exist on the same timeline, as their internal seqno values are guaranteed to be properly ordered. The fact that the fence's software seqno is not the same value as the driver's hardware seqno is irrelevant to any external view and will not cause a problem.
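A rough sketch of that split (illustrative names, not the actual series
code): the fence seqno is allocated in order on a per-context software
timeline when the request is created, while the hardware seqno is only
assigned at submission, in whatever order the scheduler chose.

#include <stdint.h>

struct fence_timeline {
	uint64_t context;	/* fence context id for one gpu context */
	uint32_t next_seqno;	/* software seqno, allocated in order */
};

struct gpu_request {
	uint64_t fence_context;
	uint32_t fence_seqno;	/* ordered on the software timeline */
	uint32_t hw_seqno;	/* global; assigned at actual submission */
};

/* At request creation: ordered within the context, scheduler-agnostic. */
static void request_create(struct gpu_request *rq, struct fence_timeline *tl)
{
	rq->fence_context = tl->context;
	rq->fence_seqno = ++tl->next_seqno;
	rq->hw_seqno = 0;	/* not yet submitted */
}

/* At hardware submission: may happen in any order the scheduler picks. */
static void request_submit(struct gpu_request *rq, uint32_t *global_seqno)
{
	rq->hw_seqno = ++*global_seqno;
}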

When the driver has been updated to support per-context hardware seqnos, it would be trivial to drop the fence's software timeline and switch to using the hardware one. That has always been the plan.


> The related issue with using struct fence (request) more is that we also
> need it for Android integration. On multi-GPU desktops we already have
> different kinds of fences, but real soon we'll also have different kinds
> of fences (GEM requests and KMS vblank/flip-complete events) on just one
> GPU, and on Android.

[more cut]

>> At the fundamental level it looks like you have not introduced timelines
>> correctly or introduced the scheduler as a separate entity for deciding
>> which request to execute next (or if this request should preempt execution).
>> Reworking the scheduler to take requests and in-fences, and correctly use
>> timelines is definitely the way to go.

The scheduler has been taking 'in-fences' since its start on Android; that was one of the reasons the whole piece of work was begun in the first place. Before the de-staging patches were dropped from the struct fence series (due to being picked up by external contributors), this scheduler patch series included support for passing an external fence into the execbuf code path for the driver to wait on before allowing that work to begin execution. Once the external work has fully landed and the scheduler patches have been rebased on top, that support will be back in, and it is actually really quite trivial.

Any resource which has an 'am I ready yet' test can be used as a dependency check on a request when the scheduler is deciding what to execute next. Adding in new internal or external dependencies is pretty easy.
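As a rough illustration of that dependency test (the names here are made
up for the sketch, not the scheduler's API):

#include <stdbool.h>
#include <stddef.h>

struct dependency {
	/* "Am I ready yet?" - anything with such a test can gate a request. */
	bool (*is_ready)(struct dependency *dep);
	struct dependency *next;
};

struct sched_node {
	struct dependency *deps;	/* in-fences, other requests, ... */
	struct sched_node *next;	/* queue of not-yet-executed requests */
};

static bool node_ready(struct sched_node *node)
{
	struct dependency *dep;

	for (dep = node->deps; dep; dep = dep->next)
		if (!dep->is_ready(dep))
			return false;

	return true;
}

/* Pick the first queued request whose dependencies have all signalled. */
static struct sched_node *scheduler_select(struct sched_node *queue)
{
	struct sched_node *node;

	for (node = queue; node; node = node->next)
		if (node_ready(node))
			return node;

	return NULL;
}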




> /me out
>
> Cheers, Daniel




