From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>

In execlists mode the internal housekeeping of discarded requests
(and so contexts and VMAs) relies solely on the retire worker, which
can be prevented from running simply by being unlucky when busy
clients are hammering on the big lock.

A prime example is the gem_close_race IGT, where this effect causes
the internal lists to grow to epic proportions, with the consequence
that object VMA traversal grows exponentially, resulting in test
runtimes of tens of minutes. Memory use is also very high and a
limiting factor on some platforms.

Since we do not want to run this internal housekeeping more
frequently, due to concerns that it may affect performance, and since
the scenario is statistically unlikely in real workloads, one
possible workaround is to run it when new client handles are opened.

This solves the issue for this particular test case, making it
complete in tens of seconds instead of tens of minutes, and adds no
run-time penalty to already running clients. It can only slightly
slow down new client startup, but on a realistically loaded system we
expect this to be insignificant. Even with heavy rendering in
progress we can have perhaps up to several thousand requests pending
retirement, which, at a typical retirement cost of 80ns to 1us per
request, is not significant.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Testcase: igt/gem_close_race/gem-close-race
Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d46a0462c765..f02991d28048 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5162,6 +5162,10 @@ int i915_gem_open(struct drm_device *dev, struct drm_file *file)
 	if (ret)
 		kfree(file_priv);
 
+	mutex_lock(&dev->struct_mutex);
+	i915_gem_retire_requests(dev);
+	mutex_unlock(&dev->struct_mutex);
+
 	return ret;
 }
-- 
1.9.1
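
For context, a sketch of how i915_gem_open() reads with this hook in
place. Everything outside the added lock/retire/unlock lines is an
assumption paraphrased from the i915 driver of roughly this era, not
quoted from the tree; the per-client init step via
i915_gem_context_open() and the file_priv fields are illustrative:

/*
 * Sketch only: the three locked lines come from the patch above;
 * the surrounding body is assumed context, not the actual source.
 */
int i915_gem_open(struct drm_device *dev, struct drm_file *file)
{
	struct drm_i915_file_private *file_priv;
	int ret;

	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
	if (!file_priv)
		return -ENOMEM;

	file->driver_priv = file_priv;
	file_priv->dev_priv = dev->dev_private;
	file_priv->file = file;

	ret = i915_gem_context_open(dev, file); /* assumed init step */
	if (ret)
		kfree(file_priv);

	/*
	 * Opportunistically retire completed requests on client open,
	 * so the lists of dead requests/contexts/VMAs cannot grow
	 * without bound when the retire worker is starved of
	 * struct_mutex by busy clients.
	 */
	mutex_lock(&dev->struct_mutex);
	i915_gem_retire_requests(dev);
	mutex_unlock(&dev->struct_mutex);

	return ret;
}

The design point is that client open is a slow path already taking
dev->struct_mutex elsewhere, so piggy-backing the retire there adds
bounded latency to a rare operation instead of cost to the hot paths.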