Quoting Joonas Lahtinen (2017-11-08 15:09:47)
> On Tue, 2017-11-07 at 22:06 +0000, Chris Wilson wrote:
> > The shared fence array is not autopruning and may continue to grow as an
> > object is shared between new timelines. Take the opportunity when we
> > think the object is idle (we have to confirm that any external fence is
> > also signaled) to decouple all the fences.
> >
> > We apply a similar trick after waiting on an object, see commit
> > e54ca9774777 ("drm/i915: Remove completed fences after a wait")
> >
> > v2: No longer need to handle the batch pool as a special case.
> > v3: Need to trylock from within i915_vma_retire as this may be called
> > from the shrinker - and we may later try to allocate underneath the
> > reservation lock, so a deadlock is possible.
> >
> > References: https://bugs.freedesktop.org/show_bug.cgi?id=102936
> > Fixes: d07f0e59b2c7 ("drm/i915: Move GEM activity tracking into a common struct reservation_object")
> > Fixes: 80b204bce8f2 ("drm/i915: Enable multiple timelines")
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
>
> <SNIP>
>
> > +++ b/drivers/gpu/drm/i915/i915_vma.c
> > @@ -54,6 +54,13 @@ i915_vma_retire(struct i915_gem_active *active,
> >  	if (--obj->active_count)
> >  		return;
> >
> > +	/* Prune the shared fence arrays iff completely idle (inc. external) */
> > +	if (reservation_object_trylock(obj->resv)) {
> > +		if (reservation_object_test_signaled_rcu(obj->resv, true))
> > +			reservation_object_add_excl_fence(obj->resv, NULL);
> > +		reservation_object_unlock(obj->resv);
> > +	}
>
> Feels a bit like this could also be a feature of reservation objects.

Yeah, we shouldn't need it so badly once the "don't keep signaled fences
in the resv.object" change lands. Until then, it's quite easy to tie up
large chunks of kernel memory via stale fences, e.g. gem_ctx_thrash.

Even when that improvement to the resv.object lands, it will still be
wise to keep this around to free the residual fences -- it just won't
have as large an impact!
-Chris
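
[Editorial note for context: if the pruning were lifted into the
reservation object API, as Joonas suggests above, a minimal sketch might
look like the following. This is only an illustration, not code from the
patch or the thread: the helper name reservation_object_prune is invented
here, and it simply reuses the 4.14-era calls (trylock,
test_signaled_rcu, add_excl_fence, unlock) quoted in the hunk above.]

#include <linux/reservation.h>

/*
 * Hypothetical helper, name made up for illustration: drop every fence
 * tracked by a reservation object, but only if all of them -- shared and
 * exclusive, including fences added externally by other users -- have
 * already signaled.  The trylock keeps it safe to call from reclaim
 * paths such as the shrinker, where blocking on the reservation lock
 * could deadlock against an allocation made while another reservation
 * lock is held.
 */
void reservation_object_prune(struct reservation_object *resv)
{
	if (!reservation_object_trylock(resv))
		return;

	if (reservation_object_test_signaled_rcu(resv, true))
		/* A NULL exclusive fence also releases the shared array */
		reservation_object_add_excl_fence(resv, NULL);

	reservation_object_unlock(resv);
}

[With such a helper, i915_vma_retire() could presumably call
reservation_object_prune(obj->resv) instead of open-coding the block in
the patch, which seems to be what Joonas is hinting at.]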