Re: [PATCH RFC 10/18] drm/scheduler: Add can_run_job callback

On 08.03.23 at 10:41, Asahi Lina wrote:
> On 08/03/2023 17.46, Christian König wrote:
>> On 07.03.23 at 15:25, Asahi Lina wrote:
>>> Some hardware may require more complex resource utilization accounting
>>> than the simple job count supported by drm_sched internally. Add a
>>> can_run_job callback to allow drivers to implement more logic before
>>> deciding whether to run a GPU job.
>> Well complete NAK.
>>
>> This is clearly going against the idea of having jobs only depend on
>> fences and nothing else, which is mandatory for correct memory management.
>>
>> If the hw is busy with something you need to return the fence for this
>> from the prepare_job callback so that the scheduler can be notified when
>> the hw is available again.
> I think you misunderstood the intent here... This isn't about job
> dependencies, it's about in-flight resource limits.
>
> drm_sched already has a hw_submission_limit that specifies the number of
> submissions that can be in flight, but that doesn't work for us because
> each job from drm_sched's point of view consists of multiple commands
> split among 3 firmware queues. The firmware can only support up to 128
> work commands in flight per queue (barriers don't count), otherwise it
> overflows a fixed-size buffer.
>
> So we need more complex accounting of how many underlying commands are
> in flight per queue to determine whether it is safe to run a new job,
> and that is what this callback accomplishes. This has to happen even
> when individual jobs have no buffer/resource dependencies between them
> (which is what the fences would express).

Yeah, I already assumed that you have something like this.

And to make it clear: this is unfortunately a complete NAK to this approach! You can't do this!

The background is that core memory management requires that signaling a fence only depends on signaling other fences and hardware progress and nothing else. Otherwise you immediately run into problems because of circular dependencies, or what we call infinite fences.

Jason Ekstrand gave a great presentation on that problem a few years ago at LPC. I strongly suggest you look that one up.

> You can see the driver implementation of that callback in
> drivers/gpu/drm/asahi/queue/mod.rs (QueueJob::can_run()), which then
> calls into drivers/gpu/drm/asahi/workqueue.rs (Job::can_submit()) that
> does the actual available slot count checks.
>
> The can_run_job logic is written to mirror the hw_submission_limit logic
> (just a bit later in the sched main loop, since we need to actually pick
> a job to do the check), and just as in that case, completion of any
> job in the same scheduler will cause another run of the main loop and
> another check (which is exactly what we want here).

Yeah, and that hw_submission_limit is based on fence signaling again.

When you have some firmware limitation that a job needs resources which are currently in use by other submissions, then those other submissions have fences as well, and you can return those in the prepare_job callback.

If those other submissions don't have fences, then you have a major design problem inside your driver, and we need to get back to square one and talk about that dependency handling.
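
Roughly sketched, and ignoring error and teardown paths, the pattern looks something like this. All the my_* names and the slot bookkeeping are invented for illustration; only the prepare_job callback signature is the real drm_sched one:

#include <linux/container_of.h>
#include <linux/dma-fence.h>
#include <linux/spinlock.h>
#include <drm/gpu_scheduler.h>

#define MY_FW_SLOT_LIMIT 128	/* hypothetical per-queue firmware limit */

/* Hypothetical driver state, invented for this sketch. */
struct my_queue {
	spinlock_t lock;
	unsigned int inflight_cmds;
	struct dma_fence *oldest_inflight_fence;
};

struct my_job {
	struct drm_sched_job base;
	struct my_queue *queue;
	unsigned int num_cmds;
};

#define to_my_job(j) container_of(j, struct my_job, base)

static struct dma_fence *
my_prepare_job(struct drm_sched_job *sched_job,
	       struct drm_sched_entity *s_entity)
{
	struct my_job *job = to_my_job(sched_job);
	struct my_queue *queue = job->queue;
	struct dma_fence *fence = NULL;

	spin_lock(&queue->lock);
	if (queue->inflight_cmds + job->num_cmds > MY_FW_SLOT_LIMIT) {
		/*
		 * Out of firmware slots: hand the scheduler a reference
		 * to the oldest in-flight job's hardware fence. The
		 * scheduler waits on it and calls prepare_job again once
		 * it signals, by which time slots have been released.
		 */
		fence = dma_fence_get(queue->oldest_inflight_fence);
	} else {
		queue->inflight_cmds += job->num_cmds;
	}
	spin_unlock(&queue->lock);

	return fence;
}

That way the wait is expressed as a fence the core can see, instead of a yes/no answer hidden inside the driver.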

> This case (potentially scheduling more than the FW job limit) is rare
> but handling it is necessary, since otherwise the entire job
> completion/tracking logic gets screwed up on the firmware end and queues
> end up stuck (I've managed to trigger this before).

Actually, that's a pretty normal use case. I've rejected similar requirements like this before as well.

For an example of how this can work, see amdgpu_job_prepare_job(): https://elixir.bootlin.com/linux/v6.3-rc1/source/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c#L251

The gang submit gives an example of a global fence lock, and the VMIDs are an example of a global shared firmware resource.
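
The completion side of the same hypothetical scheme closes the loop: release the slots before signaling the hardware fence, so any prepare_job that waited on that fence is guaranteed to find room when the scheduler picks the job again (again, all my_* names are made up for illustration):

static void my_queue_job_done(struct my_queue *queue, struct my_job *job,
			      struct dma_fence *hw_fence)
{
	/* Release the firmware slots first... */
	spin_lock(&queue->lock);
	queue->inflight_cmds -= job->num_cmds;
	spin_unlock(&queue->lock);

	/* ...then signal, waking any prepare_job waiting on this fence. */
	dma_fence_signal(hw_fence);
}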

Regards,
Christian.


> ~~ Lina



