Currently, for operations like memory clear or copy on big chunks of
memory, we generate multiple requests executed in a chain.

But if one of the generated requests fails, we would not know it
unless it happens to be the last request, because errors are not
properly propagated.

For this we need to keep propagating the chain of fence notifications
in order to always reach the final fence associated with the final
request. This way we would know whether the memory operation has
failed and whether the memory is still invalid.

On copy and clear migration, signal fences upon completion.

Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration")
Reported-by: Matthew Auld <matthew.auld@xxxxxxxxx>
Suggested-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Andi Shyti <andi.shyti@xxxxxxxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
---
 drivers/gpu/drm/i915/gt/intel_migrate.c | 31 +++++++++++++++++--------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index 3f638f1987968..8a293045a7b96 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -748,7 +748,7 @@ intel_context_migrate_copy(struct intel_context *ce,
 		rq = i915_request_create(ce);
 		if (IS_ERR(rq)) {
 			err = PTR_ERR(rq);
-			goto out_ce;
+			break;
 		}
 
 		if (deps) {
@@ -878,10 +878,14 @@ intel_context_migrate_copy(struct intel_context *ce,
 
 		/* Arbitration is re-enabled between requests. */
 out_rq:
-		if (*out)
-			i915_request_put(*out);
-		*out = i915_request_get(rq);
+		i915_sw_fence_await(&rq->submit);
+		i915_request_get(rq);
 		i915_request_add(rq);
+		if (*out) {
+			i915_sw_fence_complete(&(*out)->submit);
+			i915_request_put(*out);
+		}
+		*out = rq;
 
 		if (err)
 			break;
@@ -905,7 +909,8 @@ intel_context_migrate_copy(struct intel_context *ce,
 		cond_resched();
 	} while (1);
 
-out_ce:
+	if (*out)
+		i915_sw_fence_complete(&(*out)->submit);
 	return err;
 }
 
@@ -1005,7 +1010,7 @@ intel_context_migrate_clear(struct intel_context *ce,
 		rq = i915_request_create(ce);
 		if (IS_ERR(rq)) {
 			err = PTR_ERR(rq);
-			goto out_ce;
+			break;
 		}
 
 		if (deps) {
@@ -1056,17 +1061,23 @@ intel_context_migrate_clear(struct intel_context *ce,
 
 		/* Arbitration is re-enabled between requests. */
 out_rq:
-		if (*out)
-			i915_request_put(*out);
-		*out = i915_request_get(rq);
+		i915_sw_fence_await(&rq->submit);
+		i915_request_get(rq);
 		i915_request_add(rq);
+		if (*out) {
+			i915_sw_fence_complete(&(*out)->submit);
+			i915_request_put(*out);
+		}
+		*out = rq;
+
 		if (err || !it.sg || !sg_dma_len(it.sg))
 			break;
 
 		cond_resched();
 	} while (1);
 
-out_ce:
+	if (*out)
+		i915_sw_fence_complete(&(*out)->submit);
 	return err;
 }
 
-- 
2.39.1
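
As an aside for reviewers, the error-propagation idea above can be
sketched in isolation. The snippet below is a deliberately simplified,
hypothetical model: struct chain_fence, fence_hold() and
fence_complete() are made-up names standing in for the hold/release
behaviour of i915_sw_fence_await()/i915_sw_fence_complete() on the
submit fence, not the real i915 API. Each request's fence is held open
until the next request is published as the tail of the chain, and
completing a link forwards its error to the link that replaces it, so
a failure anywhere in the chain stays visible on the final fence:

#include <stdio.h>

struct chain_fence {
	int pending;	/* outstanding holds on this link */
	int error;	/* error status carried by this link */
};

static void fence_hold(struct chain_fence *f)
{
	f->pending++;	/* keep the fence from signalling */
}

static void fence_complete(struct chain_fence *f, struct chain_fence *waiter)
{
	/* Forward any error to the link that awaits this one. */
	if (f->error && waiter && !waiter->error)
		waiter->error = f->error;
	f->pending--;	/* drop the hold; the fence may now signal */
}

int main(void)
{
	struct chain_fence rqs[3] = { { 0, 0 }, { 0, 0 }, { 0, 0 } };
	struct chain_fence *out = NULL;	/* published tail of the chain */
	int i;

	for (i = 0; i < 3; i++) {
		struct chain_fence *rq = &rqs[i];

		fence_hold(rq);	/* cf. i915_sw_fence_await(&rq->submit) */

		if (i == 1)
			rq->error = -5;	/* the middle request fails (-EIO) */

		if (out)
			fence_complete(out, rq);	/* release previous link */
		out = rq;
	}

	if (out)
		fence_complete(out, NULL);	/* final release on exit */

	printf("final fence error: %d\n", out->error);	/* prints -5 */
	return 0;
}

With the old scheme (put the previous request and simply overwrite
*out), an error on a middle request never reaches the last fence; with
the chained holds modelled above, the final fence reports -5 (-EIO)
even though only the middle request failed.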