Re: drm/scheduler for vc5

Hi Eric,

nice to see that the scheduler gets used more and more.

The feature you need to solve both your binning/rendering problem and your MMU problem is dependency handling. See the "dependency" callback of the backend operations.

With this callback the driver can return dma_fences which need to signal (or at least be scheduled, if they target the same ring buffer/FIFO) before the job may run.
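
For reference, the backend operations in question look roughly like this at the time of writing (include/drm/gpu_scheduler.h). The scheduler keeps calling ->dependency before ->run_job until it returns NULL:

    struct drm_sched_backend_ops {
        struct dma_fence *(*dependency)(struct drm_sched_job *sched_job,
                        struct drm_sched_entity *s_entity);
        struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
        void (*timedout_job)(struct drm_sched_job *sched_job);
        void (*free_job)(struct drm_sched_job *sched_job);
    };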

Now you need dma_fences as the result of your run_job callback for the binning step anyway. So when you return this fence from the binning step as a dependency for your rendering step, the scheduler does exactly what you want, i.e. it does not start the rendering before the binning is finished.
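
In your case that could look something like the sketch below. Everything named vc5_* here is made up for illustration, only the callback signature is the scheduler's:

    #include <drm/gpu_scheduler.h>
    #include <linux/dma-fence.h>

    struct vc5_job {
        struct drm_sched_job base;
        /* Fence from the bin step's run_job, NULL for bin jobs and
         * render-only jobs. */
        struct dma_fence *bin_done;
    };

    static struct dma_fence *
    vc5_job_dependency(struct drm_sched_job *sched_job,
                       struct drm_sched_entity *s_entity)
    {
        struct vc5_job *job = container_of(sched_job, struct vc5_job, base);
        struct dma_fence *fence = job->bin_done;

        /* Hand our reference over to the scheduler, it drops it once
         * the fence has signaled. Returning NULL means no more
         * dependencies, the job can run. */
        job->bin_done = NULL;
        return fence;
    }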


The same idea can be used for the MMU switch. As an example of how to do this, see how the dependency callback is implemented in amdgpu_job_dependency():
    struct dma_fence *fence = amdgpu_sync_get_fence(&job->sync, &explicit);

First we get the "normal" dependencies from our sync object (a storage for fences).

...

    while (fence == NULL && vm && !job->vmid) {
        struct amdgpu_ring *ring = job->ring;

        r = amdgpu_vmid_grab(vm, ring, &job->sync,
                     &job->base.s_fence->finished,
                     job);
        if (r)
            DRM_ERROR("Error getting VM ID (%d)\n", r);

        fence = amdgpu_sync_get_fence(&job->sync, NULL);
    }

If we don't have any more "normal" dependencies left, we call into the VMID subsystem to allocate a VMID for that job (the hardware has 16 of those MMU contexts).

This call will pick a VMID and remember that the process of the job is now the owner of this VMID.
If the VMID previously didn't belong to the process of the current job, all fences of the old process are added to the job->sync object again.

So after having returned all "normal" dependencies, we now return the one necessary to grab the hardware resource, the VMID.
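
Transferred to your single shared MMU, the same pattern could be as simple as the sketch below (building on the invented vc5_job above, with job->pt as another invented field for the page table the job wants). It assumes the finished fence of the last submitted job covers everything still using the old page table; with separate bin and render rings you would track one fence per ring instead:

    struct vc5_mmu {
        struct mutex lock;
        u64 current_pt;                 /* page table currently programmed */
        struct dma_fence *last_user;    /* finished fence of its last user */
    };

    /* Called from the dependency callback once the "normal"
     * dependencies are exhausted. Returns a fence to wait for before
     * the page table may be switched, or NULL if no switch is needed.
     */
    static struct dma_fence *
    vc5_mmu_grab(struct vc5_mmu *mmu, struct vc5_job *job)
    {
        struct dma_fence *fence = NULL;

        mutex_lock(&mmu->lock);
        if (job->pt != mmu->current_pt) {
            /* Drain the previous owner before switching. */
            fence = mmu->last_user;
            mmu->last_user = NULL;
            mmu->current_pt = job->pt;
        }
        /* This job is now the last user of the (new) page table. */
        dma_fence_put(mmu->last_user);
        mmu->last_user = dma_fence_get(&job->base.s_fence->finished);
        mutex_unlock(&mmu->lock);

        return fence;
    }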

Regards,
Christian.

On 30.03.2018 at 22:05, Eric Anholt wrote:
I've been keeping my eye on what's going on with drm/scheduler, and I'm
definitely interested in using it.  I've got some questions about how to
fit it to this HW, though.

For this HW, most rendering jobs have two phases: binning and rendering,
and the HW has two small FIFOs for descriptions of each type of job to
be submitted.  The bin portion must be completed before emitting the
render.  Some jobs may be render only, skipping the bin phase.

The render side is what takes most of the time.  However, you can
usually bin the next frame while rendering the current one, helping keep
your shared shader cores busy when you're parsing command lists.  The
exception is if the next bin depends on your last render (think
render-to-texture with texturing in a vertex shader).

This makes me think that I should expose two entities for the HW's
binner and renderer.  Each VC6 job would have two drm_sched_jobs: the
render job would depend on the fence from the bin job, and the bin may
or may not depend on the previous render.

However, as an extra complication, the MMU is shared between binner and
renderer, so I can't schedule a new job with a page table change until
the other side finishes up.  Is there a good way to express this with
drm/scheduler, or should I work around this by internally stalling my
job submissions to the HW when a page table change is needed, and then
trigger that page table swap and submit once a job completes?

