This is essentially part of drm_sched_dependency_optimized(), which
only amdgpu seems to make use of. Use it a bit more.

This would mean that as-is amdgpu can't use the dependency helpers, at
least not with the current approach amdgpu has for deciding whether a
vm_flush is needed. Since amdgpu also has very special rules around
implicit fencing it can't use those helpers either, but adding a
drm_sched_job_await_fence_always or similar for amdgpu wouldn't be too
onerous. That way the special-case handling for amdgpu sticks out even
more, and there is a higher chance that reviewers who go across all
drivers won't miss it.

Reviewed-by: Lucas Stach <l.stach@xxxxxxxxxxxxxx>
Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
Cc: "Christian König" <christian.koenig@xxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
Cc: Luben Tuikov <luben.tuikov@xxxxxxx>
Cc: Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx>
Cc: Alex Deucher <alexander.deucher@xxxxxxx>
Cc: Jack Zhang <Jack.Zhang1@xxxxxxx>
---
 drivers/gpu/drm/scheduler/sched_main.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index ad62f1d2991c..db326a1ebf3c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -654,6 +654,13 @@ int drm_sched_job_await_fence(struct drm_sched_job *job,
 	if (!fence)
 		return 0;
 
+	/* if it's a fence from us it's guaranteed to be earlier */
+	if (fence->context == job->entity->fence_context ||
+	    fence->context == job->entity->fence_context + 1) {
+		dma_fence_put(fence);
+		return 0;
+	}
+
 	/* Deduplicate if we already depend on a fence from the same context.
 	 * This lets the size of the array of deps scale with the number of
 	 * engines involved, rather than the number of BOs.

-- 
2.32.0
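
Aside, in case the context check above looks magic: each scheduler
entity allocates two consecutive dma_fence contexts, one for its
"scheduled" fences and one for its "finished" fences, which is why both
fence_context and fence_context + 1 are tested. A lightly abridged
sketch of the upstream bookkeeping (see drm_sched_entity_init() and
sched_fence.c; not quoted verbatim):

	/* drm_sched_entity_init(): reserve two fence contexts per entity */
	entity->fence_context = dma_fence_context_alloc(2);

	/* drm_sched_fence_init(): "scheduled" fences use fence_context,
	 * "finished" fences use fence_context + 1
	 */
	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
		       &fence->lock, entity->fence_context, seq);
	dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
		       &fence->lock, entity->fence_context + 1, seq);

A fence from either context therefore comes from an earlier job on the
same entity, and since an entity runs its jobs in submission order the
dependency is satisfied by construction and can safely be dropped.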
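
To make the amdgpu escape hatch mentioned in the commit message
concrete: a drm_sched_job_await_fence_always variant would simply skip
both the self-dependency check and the deduplication, so amdgpu's
vm_flush decision still sees every fence. A minimal sketch, assuming
the job->dependencies xarray used by the await helpers in this series
(hypothetical, not part of this patch):

	/* Record every fence, including self-dependencies and duplicates
	 * from the same context; nothing may be optimized away here.
	 */
	int drm_sched_job_await_fence_always(struct drm_sched_job *job,
					     struct dma_fence *fence)
	{
		u32 id = 0;
		int ret;

		if (!fence)
			return 0;

		/* the fence reference is transferred to the xarray on success */
		ret = xa_alloc(&job->dependencies, &id, fence,
			       xa_limit_32b, GFP_KERNEL);
		if (ret)
			dma_fence_put(fence);

		return ret;
	}

An explicit call like this in amdgpu would also keep the special case
greppable, which is the point the commit message makes about reviewers.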