Re: [PATCH v2 4/9] drm/sched: Split free_job into own work item

On Tue, 12 Sep 2023 12:46:26 +0200
Danilo Krummrich <dakr@xxxxxxxxxx> wrote:

> > I'm a bit worried that leaving this single vs multi-threaded wq
> > decision to drivers is going to cause unnecessary pain, because what
> > was previously a given in terms of run/cleanup execution order (thanks
> > to the kthread+static-drm_sched_main-workflow approach) is now subject
> > to the wq ordering guarantees, which depend on the wq type picked by
> > the driver.
> 
> Not sure if this ends up being much different. The only thing I could think of
> is that IIRC with the kthread implementation cleanup was always preferred over
> run.

Given the sequence in drm_sched_main(), I'd say that cleanup and run
operations are naturally interleaved when both are available, but I
might be wrong.
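
For reference, the current kthread loop is roughly shaped like this
(heavily trimmed, error handling and fence bookkeeping dropped, so take
it as a sketch rather than the exact code):

static int drm_sched_main(void *param)
{
	struct drm_gpu_scheduler *sched = param;

	while (!kthread_should_stop()) {
		struct drm_sched_entity *entity = NULL;
		struct drm_sched_job *cleanup_job = NULL;
		struct drm_sched_job *sched_job;
		struct dma_fence *fence;

		wait_event_interruptible(sched->wake_up_worker,
			(cleanup_job = drm_sched_get_cleanup_job(sched)) ||
			(!drm_sched_blocked(sched) &&
			 (entity = drm_sched_select_entity(sched))) ||
			kthread_should_stop());

		/* At most one cleanup per iteration... */
		if (cleanup_job)
			sched->ops->free_job(cleanup_job);

		if (!entity)
			continue;

		/* ...followed by at most one run, hence the natural
		 * interleaving when both are pending. */
		sched_job = drm_sched_entity_pop_job(entity);
		if (!sched_job)
			continue;

		fence = sched->ops->run_job(sched_job);
		/* ... install the job-done callback on fence ... */
	}

	return 0;
}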

> With a single threaded wq this should be a bit more balanced.

With a single-threaded wq, it's less clear, because each work item
reschedules itself for further processing, but it's likely to be more
or less interleaved. Anyway, I'm not too worried about cleanup taking
precedence over run or the other way around, because the limited number
of HW slots (the size of the ring buffer) will regulate that.
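
To illustrate what I mean by "each work item reschedules itself"
(purely illustrative, field/function names are mine, not necessarily
what the patch ends up using):

static void drm_sched_free_job_work(struct work_struct *w)
{
	struct drm_gpu_scheduler *sched =
		container_of(w, struct drm_gpu_scheduler, work_free_job);
	struct drm_sched_job *job = drm_sched_get_cleanup_job(sched);

	if (!job)
		return;

	sched->ops->free_job(job);

	/* One job freed, re-queue ourselves for the next one. On an
	 * ordered (single-threaded) wq, the run_job work queued in the
	 * meantime gets to execute in between two free_job passes,
	 * which is why I'd expect things to stay more or less
	 * interleaved. */
	queue_work(sched->submit_wq, &sched->work_free_job);
}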

> 
> With a multi-threaded wq it's still the same, but run and cleanup can run
> concurrently,

What I'm worried about is that ^. I'm not saying it's fundamentally
unsafe, but I'm saying drm_sched hasn't been designed with this
concurrency in mind, and I fear we'll face subtle bugs if we go from
kthread to multi-threaded-wq+run-and-cleanup-split-in-2-work-items.

> which has the nice side effect that free_job() gets out of the
> fence signaling path. At least as long as the workqueue has max_active > 1.

Oh, yeah, I don't deny using a multi-threaded workqueue has some
benefits, just saying it might be trickier than it sounds.

> Which is one reason why I'm using a multi-threaded wq in Nouveau.

Note that I'm using a multi-threaded workqueue internally at the moment
to deal with all sorts of interactions with the FW (Mali HW only has a
limited number of scheduling slots, and we need to rotate entities
having jobs to execute so every one of them gets a chance to run on the
GPU), but this has been designed this way from the ground up, unlike
drm_sched_main() operations, which were mostly thought of as a fixed
sequential set of operations. That's not to say it's impossible to get
right, but I fear we'll face weird/unexpected behavior if we go from
completely-serialized to multi-threaded-with-pseudo-random-processing
order.

> 
> The latter seems a bit subtle; we probably need to document under which
> conditions free_job() is or is not within the fence signaling path.

Well, I'm not even sure it can be clearly defined when the driver is
using the submit_wq for its own work items (which can be done since we
pass an optional submit_wq when calling drm_sched_init()). Sure, having
max_active >= 2 should be enough to guarantee that the free_job work
won't block the run_job one when these are the only two work items
being queued, but what if you have many other work items being queued
by the driver to this wq, and some of those try to acquire resv locks?
Could this prevent execution of the run_job() callback, thus preventing
signaling of fences? I'm genuinely asking; I don't know enough about
the cmwq implementation to tell what happens when work items are
blocked (it might be that the worker pool is extended to unblock the
situation).
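
Concretely, the pattern I have in mind is something like this
(hypothetical driver-side code, names made up):

/* Work item queued by the driver on the same wq it passed as
 * submit_wq to drm_sched_init(). */
static void drv_vm_cleanup_work(struct work_struct *w)
{
	struct drv_vm *vm = container_of(w, struct drv_vm, cleanup_work);

	/* Might sleep for a long time if the resv lock is contended. */
	dma_resv_lock(vm->resv, NULL);
	/* ... unmap/free stuff ... */
	dma_resv_unlock(vm->resv);
}

If enough of those are in flight to exhaust max_active, that's exactly
the scenario I'm asking about above.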

Anyway, documenting when free_job() is in the dma signaling path should
be doable (single-threaded wq), but at this point, are we not better
off considering anything called from the submit_wq as being part of the
dma signaling path, so we can accommodate both cases? And if there is
cleanup processing that requires taking dma_resv locks, I'd be tempted
to queue that to a driver-specific wq (which is what I'm doing right
now), just to be safe.
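
Something along these lines (again, just a sketch of what I'm doing,
names made up):

	/* At init time: a wq dedicated to cleanup work that may take
	 * dma_resv locks, kept separate from the scheduler's submit_wq
	 * so it can never get in the way of run_job(). */
	drv->cleanup_wq = alloc_workqueue("drv-cleanup", WQ_UNBOUND, 0);

	/* When tearing down an object involves resv locks: */
	queue_work(drv->cleanup_wq, &vm->cleanup_work);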


