Re: [PATCH 2/6] drm/i915: Retire requests along rings


Quoting Tvrtko Ursulin (2018-04-24 15:46:49)
> 
> On 24/04/2018 14:14, Chris Wilson wrote:
> > In the next patch, rings are the central timeline as requests may jump
> > between engines. Therefore in the future as we retire in order along the
> > engine timeline, we may retire out-of-order within a ring (as the ring now
> > occurs along multiple engines), leading to much hilarity in miscomputing
> > the position of ring->head.
> > 
> > As an added bonus, retiring along the ring reduces the penalty of having
> > one execlists client do cleanup for another (old legacy submission
> > shares a ring between all clients). The downside is that the slow and
> > irregular (off the critical path) process of cleaning up stale requests
> > after userspace becomes a modicum less efficient.
> > 
> > In the long run, it will become apparent that the ordered
> > ring->request_list matches the ring->timeline, a fun challenge for the
> > future will be unifying the two lists to avoid duplication!
> > 
> > v2: We need both engine-order and ring-order processing to maintain our
> > knowledge of where individual rings have completed up to, as well as
> > knowing what was last executing on any engine. And finally, by decoupling
> > retiring the contexts on the engine from the timelines along the rings,
> > we do have to keep a reference to the context on each request
> > (previously it was guaranteed by the context being pinned).
> > 
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > ---
> > @@ -322,9 +325,9 @@ static void advance_ring(struct i915_request *request)
> >       } else {
> >               tail = request->postfix;
> >       }
> > -     list_del(&request->ring_link);
> > +     list_del_init(&request->ring_link);
> 
> Why the _init flavour? There are two list heads for being on two 
> separate lists, but are either path reachable after being removed from 
> the respective lists? (I may find the answer as I read on.)

Yes. Requests are held elsewhere and may end up being individually retired
(via the retire_upto paths) multiple times, but i915_request_retire() should
only be called once (otherwise it asserts).

So init + list_empty() is being used to prevent retiring the same
request multiple times.
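The retire-once guard can be sketched in plain C with a userspace mock of the
kernel's list helpers (the helper names mirror include/linux/list.h; the
request_retire() helper is hypothetical, standing in for the shape of
i915_request_retire(), not the real code):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace mock of the kernel's circular doubly-linked list helpers,
 * just to show the list_del_init() + list_empty() retire-once idiom. */
struct list_head { struct list_head *prev, *next; };

static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }

static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *e, struct list_head *head)
{
	e->next = head->next;
	e->prev = head;
	head->next->prev = e;
	head->next = e;
}

static void list_del_init(struct list_head *e)
{
	e->next->prev = e->prev;
	e->prev->next = e->next;
	INIT_LIST_HEAD(e);	/* points at itself: list_empty() is now true */
}

struct request {
	struct list_head ring_link;
	int retire_count;
};

/* Retire at most once: a second call sees the re-initialised link and
 * bails out instead of double-unlinking (hypothetical helper name). */
static void request_retire(struct request *rq)
{
	if (list_empty(&rq->ring_link))
		return;		/* already retired via another path */
	list_del_init(&rq->ring_link);
	rq->retire_count++;
}

/* Retire the same request twice; return how often it actually retired. */
static int demo_retire_twice(void)
{
	struct list_head ring;
	struct request rq = { .retire_count = 0 };

	INIT_LIST_HEAD(&ring);
	list_add(&rq.ring_link, &ring);

	request_retire(&rq);	/* unlinks and retires */
	request_retire(&rq);	/* harmless no-op */
	return rq.retire_count;
}
```

With plain list_del() the second call would walk stale pointers; the _init
flavour is what makes the list_empty() early-out safe.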

> > -     if (engine->last_retired_context)
> > -             engine->context_unpin(engine, engine->last_retired_context);
> > -     engine->last_retired_context = request->ctx;
> > -
> > -     spin_lock_irq(&request->lock);
> > -     if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &request->fence.flags))
> > -             dma_fence_signal_locked(&request->fence);
> > -     if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &request->fence.flags))
> > -             intel_engine_cancel_signaling(request);
> > -     if (request->waitboost) {
> > -             GEM_BUG_ON(!atomic_read(&request->i915->gt_pm.rps.num_waiters));
> > -             atomic_dec(&request->i915->gt_pm.rps.num_waiters);
> > -     }
> > -     spin_unlock_irq(&request->lock);
> > +     __retire_engine_upto(request->engine, request);
> >   
> 
> I did not figure out why you need to do this. Normally, retirement 
> driven from the ring timelines would retire requests on the same engine 
> belonging to a different ring, as it walks over the ring timelines.

Right, so retiring along a ring ends up out of order for a particular
engine.
 
> It may only make a difference for direct callers of i915_request_retire, 
> but I don't understand whether it is a functional difference or just an 
> optimisation.

I was hoping the GEM_BUG_ON(!list_is_first()); explained the rationale.

> So this is where list_del_init comes from, since this function can 
> retire more widely than the caller would expect. But then it retires on 
> the engine (upto) and the caller just walks the list and calls retire, 
> only to find already-unlinked elements. Could it just as well retire 
> them completely?

We're still trying to only do as little work as we can get away with.
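The interaction Tvrtko describes can be sketched with a toy model
(hypothetical names; engine_retire_upto() mirrors the role of
__retire_engine_upto(), and the early-out mirrors the list_empty() check on
ring_link, not the real i915 implementation):

```c
#include <assert.h>
#include <string.h>

#define NREQ 4

/* Toy model: each request must be retired on its engine exactly once. */
static int engine_retired[NREQ];

/* Engine-order retirement: draining "up to" request rq also retires
 * every earlier request on the engine timeline, including requests the
 * ring-order caller has not reached yet -- retiring wider than asked. */
static void engine_retire_upto(int rq)
{
	int i;

	for (i = 0; i <= rq; i++)
		engine_retired[i] = 1;
}

/* Ring-order walker: entries already unlinked by the engine side are a
 * cheap no-op, mirroring the list_empty(&rq->ring_link) early-out. */
static int ring_retire_all(void)
{
	int i, skipped = 0;

	for (i = 0; i < NREQ; i++) {
		if (engine_retired[i]) {	/* already done elsewhere */
			skipped++;
			continue;
		}
		engine_retire_upto(i);
	}
	return skipped;
}
```

So the walker does find already-unlinked elements, but skipping them is the
whole cost: only as little work as we can get away with.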

> > -     /* Move the oldest request to the slab-cache (if not in use!) */
> > -     rq = list_first_entry_or_null(&engine->timeline->requests,
> > -                                   typeof(*rq), link);
> > +     /* Move our oldest request to the slab-cache (if not in use!) */
> > +     rq = list_first_entry_or_null(&ring->request_list,
> > +                                   typeof(*rq), ring_link);
> 
> This is less efficient now - I wonder if you should still be looking at 
> the engine timeline here?

No, either way you have to walk all engine->requests up to this point, or
all ring->requests. To keep the interface manageable, retirement is in
ring order.
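The ring-order walk being discussed can be sketched as follows (a toy model
of the loop over ring->request_list, with hypothetical names; each ring
retires from its oldest request and stops at the first one still in flight):

```c
#include <assert.h>

/* Toy ring: completed[i] marks whether request i has finished on the GPU.
 * Retirement walks the ring's request list from the oldest entry and stops
 * at the first request still busy -- strictly ring order, so ring->head
 * only ever advances past fully completed requests (sketch only). */
static int retire_ring(const int *completed, int nreq)
{
	int retired = 0;

	while (retired < nreq) {
		if (!completed[retired])
			break;		/* oldest busy request: stop walking */
		retired++;		/* reclaim this request's ring space */
	}
	return retired;
}
```

A request completed out of order behind a busy one (as can happen once a
ring spans multiple engines) is simply left for a later pass, which is why
ring->head stays well defined.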

On the other hand, we don't retire as often if we can get away with it.
One step forwards, one step backwards.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



