Quoting Tvrtko Ursulin (2019-11-22 13:01:56)
>
> On 22/11/2019 11:21, Chris Wilson wrote:
> > Before checking the current i915_active state for the asynchronous work
> > we submitted, flush any ongoing callback. This ensures that our sampling
> > is robust and does not sporadically fail due to bad timing as the work
> > is running on another cpu.
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/gt/selftest_context.c | 19 +++++++++++++------
> >  1 file changed, 13 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c
> > index 3586af636304..939798338242 100644
> > --- a/drivers/gpu/drm/i915/gt/selftest_context.c
> > +++ b/drivers/gpu/drm/i915/gt/selftest_context.c
> > @@ -48,20 +48,25 @@ static int context_sync(struct intel_context *ce)
> >
> >  	mutex_lock(&tl->mutex);
> >  	do {
> > -		struct dma_fence *fence;
> > +		struct i915_request *rq;
> >  		long timeout;
> >
> > -		fence = i915_active_fence_get(&tl->last_request);
> > -		if (!fence)
> > +		if (list_empty(&tl->requests))
> >  			break;
> >
> > -		timeout = dma_fence_wait_timeout(fence, false, HZ / 10);
> > +		rq = list_last_entry(&tl->requests, typeof(*rq), link);
> > +		i915_request_get(rq);
> > +
> > +		timeout = i915_request_wait(rq, 0, HZ / 10);
> >  		if (timeout < 0)
> >  			err = timeout;
> >  		else
> > -			i915_request_retire_upto(to_request(fence));
> > +			i915_request_retire_upto(rq);
> >
> > -		dma_fence_put(fence);
> > +		spin_lock_irq(&rq->lock);
> > +		spin_unlock_irq(&rq->lock);
>
> Ugh.. this at least needs a comment.

I admit it also should not be required with the swap to list_last_entry.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx