Re: [PATCH 1/2] drm/scheduler: improve GPU scheduler documentation v2

On 16.11.23 at 23:23, Danilo Krummrich wrote:
[SNIP]

+ *
+ * The lifetime of the scheduler is managed by the driver using it. Before
+ * destroying the scheduler the driver must ensure that all hardware processing
+ * involving this scheduler object has finished by calling for example
+ * disable_irq(). It is *not* sufficient to wait for the hardware fence here
+ * since this doesn't guarantee that all callback processing has finished.
This is the part I'm most concerned about, since I feel like we leave drivers
entirely "up in the air" here. Hence, I think we need to be more verbose and
detailed about the options drivers have to ensure this.

For instance, let's assume we have the single-entity-per-scheduler topology
because the driver only uses the GPU scheduler to feed a firmware scheduler with
dynamically allocated ring buffers.

In this case the entity, scheduler and ring buffer are bound to the lifetime of
a userspace process.

What do we expect the driver to do if the userspace process is killed? As you
mentioned, only waiting for the ring to be idle (which implies that all HW
fences are signalled) is not enough: it doesn't guarantee that all free_job()
callbacks have been called yet, so stopping the scheduler before the
pending_list is actually empty would leak the memory of the jobs still waiting
to be freed.
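
To make this concrete, here is a hypothetical teardown path (the queue
structure and its members are made up for illustration, not taken from any
real driver) that runs into exactly this leak:

/* Hypothetical teardown path, not real driver code */
static void exec_queue_destroy(struct my_exec_queue *q)
{
	/* Wait until the hardware has finished the last submission ... */
	dma_fence_wait(q->last_hw_fence, false);

	/*
	 * ... but that only means the HW fences are signalled. The
	 * scheduler's free_job() work may not have run for all of them
	 * yet, so sched.pending_list can still hold jobs at this point
	 * and tearing the scheduler down now leaks them.
	 */
	drm_sched_entity_destroy(&q->entity);
	drm_sched_fini(&q->sched);
}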

I already brought this up when we were discussing Matt's Xe-inspired scheduler
patch series, and it seems there was no interest in providing drivers with a
common mechanism that guarantees the pending_list is empty. Hence, I really
think we should at least give recommendations on how drivers should deal with that.

I put this work on hold to have time to look deeper into this and to try to find alternative ways of handling it.

I think this is another good reason why the scheduler should really not be involved in freeing jobs, but let's first discuss another issue with this.

It goes far down into the underlying dma_fence mechanism, which guarantees that hardware operations have finished, but not that the associated software callbacks are done.

So what happens is that components like the scheduler can't just wait for dma_fences to be sure that registered callbacks are no longer executing on another CPU.
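
Roughly, the race in question looks like this (illustration only, simplified):

    CPU 0 (signalling path)                 CPU 1 (teardown path)

    dma_fence_signal(fence)
      sets the signalled bit
      starts invoking the registered        observes the fence as signalled
      callbacks one by one ...              (e.g. dma_fence_is_signaled())
      ... callbacks still running           and frees state those callbacks
                                            still dereference -> use-after-free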

See this patch here for another example where this totally bites us in drivers, completely independent of the GPU scheduler:

commit 7c703a7d3f2b50a6187267420a4d3d7e62fa3206
Author: xinhui pan <xinhui.pan@xxxxxxx>
Date:   Tue Apr 12 19:52:16 2022 +0800

    drm/amdgpu: Fix one use-after-free of VM

Basically the solution amdgpu came up with is to take and drop the spinlock of the underlying dma_fence context:

/* Make sure that all fence callbacks have completed */
spin_lock_irqsave(vm->last_tlb_flush->lock, flags);
spin_unlock_irqrestore(vm->last_tlb_flush->lock, flags);

But this is just a hack that only works because amdgpu knows the internals of its own dma_fence implementation.
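
For reference, my reading of why cycling the lock works at all: the core signalling path invokes the callbacks with fence->lock held, roughly like this (heavily abridged sketch of dma_fence_signal(), not the verbatim code from drivers/dma-buf/dma-fence.c):

	spin_lock_irqsave(fence->lock, flags);
	/* mark the fence as signalled, then ... */
	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
		list_del_init(&cur->node);
		cur->func(fence, cur);	/* callbacks run under fence->lock */
	}
	spin_unlock_irqrestore(fence->lock, flags);

Once the lock/unlock pair has gone through, no callback on that fence can still be in flight, but that only follows from how the signalling path is implemented internally.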

For the scheduler this is not applicable. I've mentioned this problem to Daniel before, but at least at that time he thought that this was purely a driver problem.

Ideas?

Regards,
Christian.

