On 23/04/2018 10:06, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-04-23 09:47:57)
>> On 20/04/2018 14:20, Chris Wilson wrote:
>>>  void i915_retire_requests(struct drm_i915_private *i915)
>>>  {
>>> -	struct intel_engine_cs *engine;
>>> -	enum intel_engine_id id;
>>> +	struct intel_ring *ring, *next;
>>>
>>>  	lockdep_assert_held(&i915->drm.struct_mutex);
>>>
>>>  	if (!i915->gt.active_requests)
>>>  		return;
>>>
>>> -	for_each_engine(engine, i915, id)
>>> -		engine_retire_requests(engine);
>>> +	list_for_each_entry_safe(ring, next, &i915->gt.rings, link)
>>> +		ring_retire_requests(ring);
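For readers following along, the per-ring helper the new loop calls could look
roughly like the sketch below; it illustrates the idea (retire in submission
order until the first incomplete request) rather than the exact body in the
patch:

/*
 * Sketch of the per-ring retire helper called by the hunk above; the
 * exact code in the patch may differ.
 */
static void ring_retire_requests(struct intel_ring *ring)
{
	struct i915_request *request, *next;

	/*
	 * Requests on a single ring complete in order, so stop at the
	 * first one that has not been signalled yet.
	 */
	list_for_each_entry_safe(request, next,
				 &ring->request_list, ring_link) {
		if (!i915_request_completed(request))
			break;

		i915_request_retire(request);
	}
}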
>> [Continuing from previous discussion on try-bot.]
>>
>> I had a thought that this could be managed more efficiently in a very
>> simple manner.
>>
>> We rename timeline->inflight_seqnos to something like live_requests and
>> manage it per ctx timeline, from request_add to request_retire. At the
>> same time we only link ring->ring_link into i915->gt.(live_)rings while
>> the ctx timeline live_requests count is greater than zero. In other
>> words, list_add on 0->1, list_del on 1->0.
>>
>> This way the retire path does not have to walk all known rings, but only
>> the rings with live (unretired) requests.
>>
>> What do you think?
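For illustration, that 0->1 / 1->0 bookkeeping could look roughly like the
sketch below; live_requests, gt.live_rings and the helper names are
placeholders rather than existing fields:

/*
 * Sketch only: count unretired requests per context timeline and keep
 * the ring on a global "live" list while the count is non-zero, so
 * i915_retire_requests() only walks rings that have work to retire.
 * live_requests, gt.live_rings and the helper names are placeholders.
 */
static void timeline_add_live_request(struct drm_i915_private *i915,
				      struct intel_timeline *tl,
				      struct intel_ring *ring)
{
	lockdep_assert_held(&i915->drm.struct_mutex);

	/* 0 -> 1: first unretired request, make the ring visible. */
	if (tl->live_requests++ == 0)
		list_add(&ring->link, &i915->gt.live_rings);
}

static void timeline_retire_live_request(struct intel_timeline *tl,
					 struct intel_ring *ring)
{
	/* 1 -> 0: last unretired request gone, hide the ring again. */
	if (--tl->live_requests == 0)
		list_del(&ring->link);
}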
> I wouldn't resurrect inflight_seqnos for this, we can simply use
> list_empty(ring->request_list). Slight icky feeling every time we do
> anything under struct_mutex, but by this point I itch all over.
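A minimal sketch of that list_empty() variant, with gt.live_rings and the
helper names again being placeholders and not code from the series:

/*
 * Sketch only: no counter, just key the ring's membership of the
 * global list off whether its request_list is empty.
 */
static void ring_add_request(struct drm_i915_private *i915,
			     struct intel_ring *ring,
			     struct i915_request *rq)
{
	lockdep_assert_held(&i915->drm.struct_mutex);

	/* Becoming non-empty: link the ring into the live list. */
	if (list_empty(&ring->request_list))
		list_add(&ring->link, &i915->gt.live_rings);

	list_add_tail(&rq->ring_link, &ring->request_list);
}

static void ring_remove_request(struct intel_ring *ring,
				struct i915_request *rq)
{
	list_del(&rq->ring_link);

	/* Became empty: drop the ring from the live list. */
	if (list_empty(&ring->request_list))
		list_del(&ring->link);
}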
Right, that's even simpler.
> Seems like a small enough patch to try afterwards. I think I class it as
> an optimisation, so I would like to get the bigger engine-to-ring change
> in and soaking first.
Ok, as you wish.
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Regards,
Tvrtko