Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow control

Hi Boris,

On 27.10.23 at 12:17, Boris Brezillon wrote:
Hi Christian,

On Fri, 27 Oct 2023 11:06:44 +0200
Christian König <christian.koenig@xxxxxxx> wrote:

On 27.10.23 at 10:22, Boris Brezillon wrote:
On Fri, 27 Oct 2023 09:44:13 +0200
Christian König <christian.koenig@xxxxxxx> wrote:
On 27.10.23 at 09:39, Boris Brezillon wrote:
On Fri, 27 Oct 2023 09:35:01 +0200
Christian König <christian.koenig@xxxxxxx> wrote:
On 27.10.23 at 09:32, Boris Brezillon wrote:
On Fri, 27 Oct 2023 09:22:12 +0200
Christian König <christian.koenig@xxxxxxx> wrote:
+
+	/**
+	 * @update_job_credits: Called once the scheduler is considering this
+	 * job for execution.
+	 *
+	 * Drivers may use this to update the job's submission credits,
+	 * which is useful e.g. to deduct the number of native fences that
+	 * have been signaled in the meantime.
+	 *
+	 * The callback must either return the new number of submission credits
+	 * for the given job, or zero if no update is required.
+	 *
+	 * This callback is optional.
+	 */
+	u32 (*update_job_credits)(struct drm_sched_job *sched_job);
Why do we need an extra callback for this?

Just document that prepare_job() is allowed to reduce the number of
credits the job might need.
->prepare_job() is called only once if the returned fence is NULL, but
we need this credit-update to happen every time a job is considered for
execution by the scheduler.
But the job is only considered for execution once. Where do you see
this being called multiple times?
Nope, it's not. If drm_sched_can_queue() returns false, the scheduler
will go look for another entity that has a job ready for execution, and
get back to this entity later, and test drm_sched_can_queue() again.
Basically, any time drm_sched_can_queue() is called, the job credits
update should happen, so we have an accurate view of how many credits
this job needs.
Well, that is the handling I already rejected because it creates
unfairness between processes. When you consider the credits needed
*before* scheduling, jobs with a lower credit count are always preferred
over jobs with a higher credit count.
My bad, it doesn't pick another entity when an entity with a
ready job that doesn't fit the queue is found, it just bails out from
drm_sched_rq_select_entity_rr() and returns NULL (AKA: no ready entity
found). But we still want to update the job credits before checking if
the job fits or not (next time this entity is tested).
Why? I only see a few possibilities here:

1. You need to wait for submissions to the same scheduler to finish
before the one you want to push to the ring.
      This case can be avoided by trying to cast the dependency fences to
drm_sched_fences and checking whether they are already scheduled (a
sketch of such a check follows below).

2. You need to wait for submissions to a different scheduler instance,
and in this case you should probably return that as a dependency instead.
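
For case 1, such a check could look like the following sketch (the
helper name is made up; to_drm_sched_fence() and the scheduled fence
are existing scheduler API):

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

/*
 * Sketch only: return true when the dependency is a scheduler fence
 * whose producing job was already pushed to the hardware ring.
 */
static bool dep_already_scheduled(struct dma_fence *dep)
{
	struct drm_sched_fence *s_fence = to_drm_sched_fence(dep);

	/* Not a scheduler fence, nothing to short-circuit here. */
	if (!s_fence)
		return false;

	/* The producing job has already been handed to the hardware. */
	return dma_fence_is_signaled(&s_fence->scheduled);
}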
It's already described as a dependency, but native dependency waits are
deferred to the FW (we wait on scheduled to run the job, not finished).
The thing is, after ->prepare_job() returned NULL (no non-native deps
remaining), and before ->run_job() gets called, several of these native
deps might get signaled, while we still consider job->submission_credits
unchanged. Each of those signalled fences could have reduced the job's
credits, potentially allowing it to be submitted sooner.
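
To illustrate, a driver-side hook could look something like this (the
mydrv_job layout and the one-credit-per-pending-native-wait costing are
made-up driver details, not part of the patch):

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

struct mydrv_job {
	struct drm_sched_job base;
	u32 payload_credits;		/* ring space for the job itself */
	unsigned int native_fence_count;
	struct dma_fence **native_fences; /* deps the FW waits on at run time */
};

static u32 mydrv_update_job_credits(struct drm_sched_job *sched_job)
{
	struct mydrv_job *job = container_of(sched_job, struct mydrv_job, base);
	u32 credits = job->payload_credits;
	unsigned int i;

	/* Only native deps that are still pending cost a wait slot. */
	for (i = 0; i < job->native_fence_count; i++) {
		if (!dma_fence_is_signaled(job->native_fences[i]))
			credits++;
	}

	/* Per the kerneldoc above: return the new credit count. */
	return credits;
}

Every time this runs with fewer pending native fences, the job gets
cheaper and fits into the ring sooner.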

Ah, ok, that at least clears up your intentions here.

The question is whether that is really important for you. It only seems to let you fill up more of the ring buffer.


So to me it looks like when prepare_job() is called, because the job was
selected as the next one for submission, you should already know how
many credits it needs.
You know how many credits it needs when ->prepare_job() is called, but
if this number is too high, the entity will not be picked, and next
time it's checked, you'll still be considering the job credits from the
time ->prepare_job() was called, which might differ from the current
job credits (native fences might have been signalled in the meantime,
and those waits could be evicted from the job).

What you can do is look at the credits of a job *after* it was picked
up for scheduling so that you can schedule more jobs.
Sure, but then you might further delay your job if something made it
smaller (i.e. native fences got signaled) between ->prepare_job() and
drm_sched_can_queue(). And any new drm_sched_can_queue() test would
just see the old credits value.
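
To make the flow concrete, here's roughly what I have in mind (pseudo-
code for the path, not the literal patch; the scheduler-side credit
bookkeeping fields are approximations of this series):

#include <drm/gpu_scheduler.h>
#include <drm/spsc_queue.h>
#include <linux/atomic.h>

static bool drm_sched_can_queue(struct drm_gpu_scheduler *sched,
				struct drm_sched_entity *entity)
{
	struct spsc_node *node = spsc_queue_peek(&entity->job_queue);
	struct drm_sched_job *s_job;

	if (!node)
		return false;

	s_job = container_of(node, struct drm_sched_job, queue_node);

	/* Optional driver hook: refresh the job's cost on every check. */
	if (sched->ops->update_job_credits) {
		u32 credits = sched->ops->update_job_credits(s_job);

		/* Zero means "no update required", per the kerneldoc. */
		if (credits)
			s_job->submission_credits = credits;
	}

	/* Assumed bookkeeping: remaining ring budget vs. job cost. */
	return sched->credit_limit - atomic_read(&sched->credit_count) >=
	       s_job->submission_credits;
}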

Out of curiosity, what are you worried about with this optional
->update_job_credits() call in the drm_sched_can_queue() path? Is the
if (sched->update_job_credits) check considered too much overhead for
drivers that don't need it?
Since the dma_fences are also used for resource management, the
scheduler is vital for correct system operation.

We had massive problems because people tried to over-optimize the
dma_fence handling, which led to memory corruptions that were very hard
to narrow down.

So for every change you come up with here, you need a very, very good
justification. And just saving a bit of ring buffer space is certainly
not one of them.
I fail to see how calling ->update_job_credits() changes the scheduler's
behavior, or how it relates to the sort of memory corruption you're
referring to.

Yeah, you are right that's not even remotely related.

And yes, we can live with the overhead of treating jobs as
slightly bigger than they actually are, thus potentially delaying their
execution. It's just that I don't quite understand the rationale behind
this conservatism, as I don't really see what negative impact this extra
->update_job_credits() call in the credit-checking path has, other than
the slight overhead of an if-check for drivers that don't need it.

Experience has shown that we should not make the scheduler more complicated than necessary. And I still think that the ring buffers only need to be filled enough to keep the hardware busy.

If this has some measurable positive effect then yeah, we should probably do it. But as long as it's only a nice-to-have, I have some objections to it.

Regards,
Christian.


Regards,

Boris



