On Thu, 25 Jul 2019 at 19:24, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
>
> Currently we use the engine->active.lock to ensure that the request is
> not retired as we capture the data. However, we only need to ensure that
> the vma are not removed prior to us acquiring their contents, and
> since we have already relinquished our stop-machine protection, we
> assume that the user will not be overwriting the contents before we are
> able to record them.
>
> In order to capture the vma outside of the spinlock, we acquire a
> reference and mark the vma as active to prevent it from being unbound.
> However, since it is tricky to allocate an entry in the fence tree (doing
> so would require taking a mutex) while inside the engine spinlock, we
> use an atomic bit and special case the handling for i915_active_wait.
>
> The core benefit is that we can use some non-atomic methods for mapping
> the device pages, we can move the slow compression phase out of atomic
> context (i.e. stop antagonising the nmi-watchdog), and we no longer need
> large reserves of atomic pages.
>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111215
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_active.c       |  34 ++++++-
>  drivers/gpu/drm/i915/i915_active.h       |   3 +
>  drivers/gpu/drm/i915/i915_active_types.h |   3 +
>  drivers/gpu/drm/i915/i915_gpu_error.c    | 113 ++++++++++++++++-------
>  4 files changed, 118 insertions(+), 35 deletions(-)

<snip>

>
>  static struct drm_i915_error_object *
> @@ -1370,6 +1399,7 @@ gem_record_rings(struct i915_gpu_state *error, struct compress *compress)
>                 struct intel_engine_cs *engine = i915->engine[i];
>                 struct drm_i915_error_engine *ee = &error->engine[i];
>                 struct i915_request *request;
> +               struct capture_vma *capture;

Not even setting capture = NULL here?

Reviewed-by: Matthew Auld <matthew.auld@xxxxxxxxx>
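
For readers outside the i915 tree, a minimal sketch of the locking pattern the commit message describes: build a lightweight list of vma references while the engine spinlock is held, then do the heavyweight mapping and compression after the lock is dropped. The helper names and the plain malloc'd list below are hypothetical stand-ins, not the driver's actual API.

```c
#include <stdlib.h>

struct vma;                             /* opaque stand-in for a GPU buffer mapping */

struct capture_vma {
	struct capture_vma *next;       /* singly-linked list built under the lock */
	struct vma *vma;                /* vma kept alive until it is recorded */
};

/* Called with the engine lock held: only take a reference, no heavy work. */
static struct capture_vma *
capture_vma_ref(struct capture_vma *head, struct vma *vma)
{
	struct capture_vma *c = malloc(sizeof(*c));

	if (!c)
		return head;            /* on allocation failure, skip this vma */

	/*
	 * The real driver would also acquire a vma reference and set an
	 * "active" bit here so the vma cannot be unbound in the meantime.
	 */
	c->vma = vma;
	c->next = head;
	return c;
}

/* Called after the lock is dropped: the slow part runs outside atomic context. */
static void record_and_release(struct capture_vma *head)
{
	while (head) {
		struct capture_vma *c = head;

		head = c->next;
		/*
		 * Map and compress c->vma contents here; it is now safe to
		 * sleep, fault in pages and allocate normally.
		 */
		free(c);
	}
}
```

The point of the split is that the section under the spinlock stays cheap and constant per vma, while the expensive compression no longer runs in atomic context and no longer needs a large reserve of atomic pages.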