On Tue, Sep 19, 2017 at 11:07:43AM +0100, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2017-09-19 10:28:42)
> > 
> > On 18/09/2017 17:27, Chris Wilson wrote:
> > > If preemption occurs at precisely the right moment, we may decide that
> > > the wait is complete even though the wait's request is no longer
> > > executing (having been preempted). We handle this situation by
> > > double-checking that request after deciding that the wait is complete.
> > > 
> > > Reported-by: Michał Winiarski <michal.winiarski@xxxxxxxxx>
> > > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > > Cc: Michał Winiarski <michal.winiarski@xxxxxxxxx>
> > > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > > ---
> > >  drivers/gpu/drm/i915/i915_irq.c | 7 +++++--
> > >  1 file changed, 5 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> > > index bb69c5b0efc4..7a53d94b7e61 100644
> > > --- a/drivers/gpu/drm/i915/i915_irq.c
> > > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > > @@ -1053,10 +1053,13 @@ static void notify_ring(struct intel_engine_cs *engine)
> > >  	 */
> > >  	if (i915_seqno_passed(intel_engine_get_seqno(engine),
> > >  			      wait->seqno)) {
> > > +		struct drm_i915_gem_request *waiter = wait->request;
> > > +
> > >  		wakeup = true;
> > >  		if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
> > > -			      &wait->request->fence.flags))
> > > -			rq = i915_gem_request_get(wait->request);
> > > +			      &waiter->fence.flags) &&
> > > +		    intel_wait_check_request(wait, waiter))
> > > +			rq = i915_gem_request_get(waiter);
> > >  	}
> > > 
> > >  	if (wakeup)
> > 
> > Hm, but as the user interrupt is not serialized to execlists, what
> > prevents the preemption from happening after intel_wait_check_request()
> > and before dma_fence_signal()?
> 
> The preemption is serialized to the breadcrumb, so that our
> understanding of i915_gem_request_completed() is accurate.
> It would be
> a nasty bug if we flagged DMA_FENCE_FLAG_SIGNALED_BIT prior to
> i915_gem_request_completed().
> 
> Here, intel_wait_check_request() just confirms that wait->seqno still
> matches rq->global_seqno. So, as we have already found that the
> breadcrumb has passed wait->seqno, the request is ready to wake up. We
> only do the wake avoidance if we believe that we can trust the ordering
> of MI_USER_INTERRUPT with respect to the breadcrumb write, so that we
> then know that if the request is not yet ready, we will receive another
> interrupt in the future. (On the other side, if the wait queue is
> manipulated, it ensures that the new waiter is woken to do the
> breadcrumb check to avoid missed interrupts.)
> 
> So... if the preemption happens after intel_wait_check_request(), it
> does not affect the prior completed request.

Reviewed-by: Michał Winiarski <michal.winiarski@xxxxxxxxx>

-Michał

> -Chris
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx