Re: drm/scheduler for vc5

Christian König <christian.koenig@xxxxxxx> writes:

> Hi Eric,
>
> nice to see that the scheduler gets used more and more.
>
> The feature you need to solve both your binning/rendering ordering and 
> your MMU problem is dependency handling. See the "dependency" callback 
> of the backend operations.
>
> With this callback the driver can return dma_fences which need to signal 
> (or at least be scheduled, if they target the same ring buffer/FIFO) 
> before the job is allowed to run.
>
> Now you need a dma_fence as the result of your run_job callback for the 
> binning step anyway. So when you return that fence from the binning step 
> as a dependency of your rendering step, the scheduler does exactly what 
> you want, i.e. it won't start the rendering before the binning has 
> finished.
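
If I'm reading the contract right, that part maps pretty directly for us.
Something like this, maybe (completely untested sketch; the vc5_job,
to_vc5_job and bin_done_fence names are made up just to illustrate the
shape):

static struct dma_fence *
vc5_job_dependency(struct drm_sched_job *sched_job,
		   struct drm_sched_entity *s_entity)
{
	struct vc5_job *job = to_vc5_job(sched_job);

	/* The scheduler keeps calling back until we return NULL, so
	 * hand over the binning job's "finished" fence exactly once
	 * and the render job won't reach run_job() until binning has
	 * signaled.
	 */
	if (job->bin_done_fence) {
		struct dma_fence *fence = job->bin_done_fence;

		job->bin_done_fence = NULL;
		return fence;
	}

	return NULL;
}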
>
>
> The same idea can be used for the MMU switch. As an example of how to 
> do this, see how the dependency callback is implemented in 
> amdgpu_job_dependency():
>>     struct dma_fence *fence = amdgpu_sync_get_fence(&job->sync, 
>> &explicit);
>
> First we get the "normal" dependencies from our sync object (a storage 
> for fences).
>
> ...
>>     while (fence == NULL && vm && !job->vmid) {
>>         struct amdgpu_ring *ring = job->ring;
>>
>>         r = amdgpu_vmid_grab(vm, ring, &job->sync,
>>                      &job->base.s_fence->finished,
>>                      job);
>>         if (r)
>>             DRM_ERROR("Error getting VM ID (%d)\n", r);
>>
>>         fence = amdgpu_sync_get_fence(&job->sync, NULL);
>>     }
>
> If we don't have any more "normal" dependencies left, we call into the 
> VMID subsystem to allocate one of the MMU contexts (VMIDs) for that job 
> (the hardware has 16 of them).
>
> This call picks a VMID and remembers that the job's process is now the 
> owner of that VMID. If the VMID previously didn't belong to the current 
> job's process, all fences of the old process are added to the job->sync 
> object again.

This makes sense when you have many VMIDs and reuse won't happen very
often.  I'm concerned that when I effectively have one VMID that I need
to keep swapping, we're fixing a specific serialization of the jobs at
the time they're submitted to the kernel (the dependency() callback)
rather than at the time the scheduler decides it would like to submit
to the HW (the run_job() callback, after it has picked a job based on
priority).
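
Concretely, with a single slot, the grab on the dependency() side would
end up being roughly this shape (again purely illustrative, all of the
vc5_* names are invented):

/* Called from the dependency() path, i.e. at job submission time. */
static void vc5_mmu_grab(struct vc5_dev *vc5, struct vc5_job *job)
{
	struct vc5_mmu_slot *slot = &vc5->mmu_slot;	/* only one */
	int i;

	if (slot->owner == job->client)
		return;

	/* New owner: every outstanding fence of the previous client
	 * becomes a dependency of this job, and that set is fixed
	 * right now, at submit time.  With one slot this happens on
	 * nearly every client switch, so jobs end up ordered by
	 * submission rather than by whatever run_job() would have
	 * picked by priority.
	 */
	for (i = 0; i < slot->num_fences; i++)
		vc5_job_add_dep(job, dma_fence_get(slot->fences[i]));

	slot->owner = job->client;
}

That ordering gets locked in before the scheduler has looked at
priorities at all, which is the part that worries me.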


_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
