Re: [PATCH v5 14/20] drm/sched: Don't store self-dependencies

On 05.08.21 at 15:25, Daniel Vetter wrote:
On Thu, Aug 5, 2021 at 3:18 PM Christian König <christian.koenig@xxxxxxx> wrote:


On 05.08.21 at 12:46, Daniel Vetter wrote:
This is essentially part of drm_sched_dependency_optimized(), which
only amdgpu seems to make use of. Use it a bit more.

This would mean that, as-is, amdgpu can't use the dependency helpers, at
least not with the current approach amdgpu has for deciding whether a
vm_flush is needed. Since amdgpu also has very special rules around
implicit fencing it can't use those helpers either, and adding a
drm_sched_job_await_fence_always or similar for amdgpu wouldn't be too
onerous. That way the special-case handling for amdgpu sticks out even
more, and we have a better chance that reviewers who go across all
drivers won't miss it.
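
(For illustration, a minimal sketch of what such an "await fence always"
helper could look like. The drm_sched_job_add_dependency_always name, and
the assumption that dependencies live in the job's xarray as elsewhere in
this series, are mine, not part of the patch:)

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>
#include <linux/xarray.h>

/* Hypothetical variant of drm_sched_job_add_dependency(): store every
 * fence unconditionally, skipping both the self-dependency elision and
 * the same-context deduplication, so a driver like amdgpu still gets to
 * see all of its fences.
 */
int drm_sched_job_add_dependency_always(struct drm_sched_job *job,
					struct dma_fence *fence)
{
	u32 id = 0;
	int ret;

	if (!fence)
		return 0;

	/* Like the common helper, this consumes the fence reference. */
	ret = xa_alloc(&job->dependencies, &id, fence, xa_limit_32b,
		       GFP_KERNEL);
	if (ret)
		dma_fence_put(fence);

	return ret;
}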
Well, you should probably drop the sentence about the vm_flush; that is
completely unrelated.

In addition to that, I still don't think this is a good idea.
Dependency handling is something completely driver-specific.

E.g. even when you have submitted jobs back to back, they might still
need a cache flush in between, and that is not specific to amdgpu.

What you can do is optimize for this while looking at the fences later
on, and then note that you have done so and which hw fence you used
last instead.
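
(A rough sketch of that idea; all the my_ring names are made up for
illustration, only drm_sched_dependency_optimized() is the real helper:)

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

struct my_ring {
	struct drm_sched_entity *entity;
	struct dma_fence *last_sync;	/* last hw fence we synced to */
	bool needs_flush;
};

static void my_ring_note_dependency(struct my_ring *ring,
				    struct dma_fence *fence)
{
	/* Ordering on the same scheduler makes a CPU-side wait pointless,
	 * but the engine may still need a cache flush / pipeline sync
	 * between the two jobs, so remember the fence for later.
	 */
	if (drm_sched_dependency_optimized(fence, ring->entity)) {
		ring->needs_flush = true;
		dma_fence_put(ring->last_sync);
		ring->last_sync = dma_fence_get(fence);
	}
}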
Out of the 6 drivers using drm/sched, 5 can use this. When we get i915
over, that one will be added to the list. amdgpu can't use any of this
anyway due to the vm_id allocation requirements, which is why I
mention that. Also note that all the callbacks are still there, so you
can just ignore all of this and still build your own, like amdgpu does.
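
(For reference, a sketch of that escape hatch: only drm_sched_backend_ops
and the .dependency callback signature are real, the my_* names are
placeholders:)

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

static struct dma_fence *my_job_dependency(struct drm_sched_job *sched_job,
					   struct drm_sched_entity *s_entity)
{
	/* Return the next fence this job must wait on before it can run,
	 * or NULL once there are none left. Driver-specific rules, self-
	 * dependencies included, go here -- this is the hook amdgpu uses
	 * via amdgpu_job_dependency().
	 */
	return NULL;
}

static const struct drm_sched_backend_ops my_sched_ops = {
	.dependency	= my_job_dependency,
	/* .run_job, .timedout_job and .free_job as usual */
};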

The VMID allocation stuff is rather easy to handle; that's why I noted we should remove that sentence.

The problematic stuff is handling the cache flush and pipeline sync, which you make impossible with this here.

So I'm not sure what exactly your objection is, aside from "this doesn't
fit for amdgpu", which a) I know, b) the commit message explains, and c)
doesn't actually hurt amdgpu in the slightest. And we still get the
benefit that for most drivers it's a nice optimization.

Well, that's exactly what I wanted to avoid. We can still use this in amdgpu even with the VMID allocation stuff, and I still hope to do so.

Can't we add this as a wrapper or similar?
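
(One possible shape of such a wrapper, purely as a sketch; the my_job
type and its bookkeeping field are invented: the driver peeks at
self-fences for its pipeline-sync/VMID bookkeeping before handing the
fence to the common helper, which may then elide it:)

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

struct my_job {
	struct drm_sched_job base;
	struct dma_fence *last_self_fence; /* for pipeline-sync decisions */
};

static int my_job_add_dependency(struct my_job *job, struct dma_fence *fence)
{
	struct drm_sched_entity *entity = job->base.entity;

	/* Remember self-fences before the common helper elides them. */
	if (fence && (fence->context == entity->fence_context ||
		      fence->context == entity->fence_context + 1)) {
		dma_fence_put(job->last_self_fence);
		job->last_self_fence = dma_fence_get(fence);
	}

	/* The helper consumes the fence reference. */
	return drm_sched_job_add_dependency(&job->base, fence);
}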

Christian.

-Daniel

Regards,
Christian.

Reviewed-by: Lucas Stach <l.stach@xxxxxxxxxxxxxx>
Acked-by: Melissa Wen <mwen@xxxxxxxxxx>
Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
Cc: "Christian König" <christian.koenig@xxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
Cc: Luben Tuikov <luben.tuikov@xxxxxxx>
Cc: Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx>
Cc: Alex Deucher <alexander.deucher@xxxxxxx>
---
   drivers/gpu/drm/scheduler/sched_main.c | 7 +++++++
   1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index f77456929139..49e507f91ec0 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -660,6 +660,13 @@ int drm_sched_job_add_dependency(struct drm_sched_job *job,
       if (!fence)
               return 0;

+     /* if it's a fence from us it's guaranteed to be earlier */
+     if (fence->context == job->entity->fence_context ||
+         fence->context == job->entity->fence_context + 1) {
+             dma_fence_put(fence);
+             return 0;
+     }
+
       /* Deduplicate if we already depend on a fence from the same context.
        * This lets the size of the array of deps scale with the number of
        * engines involved, rather than the number of BOs.
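
(As an aside on why the hunk compares two contexts: each drm_sched_fence
publishes a "scheduled" and a "finished" fence on adjacent fence contexts
of the entity, so a self-dependency can show up as either one. The check
in isolation, helper name invented:)

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

static bool my_is_own_fence(struct drm_sched_entity *entity,
			    struct dma_fence *fence)
{
	return fence->context == entity->fence_context ||	/* scheduled */
	       fence->context == entity->fence_context + 1;	/* finished */
}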




