Quoting Mika Kuoppala (2019-06-14 15:10:08)
> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
>
> > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> > index 1cbc3ef4fc27..5311286578b7 100644
> > --- a/drivers/gpu/drm/i915/i915_request.c
> > +++ b/drivers/gpu/drm/i915/i915_request.c
> > @@ -1444,7 +1444,15 @@ long i915_request_wait(struct i915_request *rq,
> >  		return -ETIME;
> >  
> >  	trace_i915_request_wait_begin(rq, flags);
> > -	lock_map_acquire(&rq->i915->gt.reset_lockmap);
> > +
> > +	/*
> > +	 * We must never wait on the GPU while holding a lock as we
> > +	 * may need to perform a GPU reset. So while we don't need to
> > +	 * serialise wait/reset with an explicit lock, we do want
> > +	 * lockdep to detect potential dependency cycles.
> > +	 */
> > +	mutex_acquire(&rq->i915->gpu_error.wedge_mutex.dep_map,
> > +		      0, 0, _THIS_IP_);
>
> Seems to translate to an exclusive lock with full checking.
>
> There was of course a slight possibility that the previous reviewer did
> read all of lockdep.h, looked at the wedge mutex and connected the dots.
> Well, it is obvious now.

Hah, I had forgotten all about wedge_mutex :-p

Hopefully, this keeps our reset handling robust. First I have to fix the
mistakes I've recently made... I just need to find a reviewer for
struct_mutex removal :)
-Chris
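
The hunk above is the usual lockdep-annotation trick: acquire and release
only the dep_map of a lock that the reset path really takes, so lockdep
learns the wait-vs-reset ordering while the waiter never actually blocks
on the mutex. Below is a minimal, hypothetical sketch of that pattern,
not the i915 code itself: the device struct, lock name and wait helper
are invented for illustration, and the three-argument mutex_release()
follows the lockdep API of kernels contemporary with this patch (newer
kernels dropped the middle argument).

/*
 * Hypothetical sketch: annotate a wait with a real lock's lockdep map so
 * that ordering cycles against that lock are reported, without ever
 * blocking on the lock itself.
 */
#include <linux/kernel.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>

struct my_device {
	struct mutex reset_mutex;	/* taken for real by the reset path */
};

/* Stand-in for the real hardware wait. */
static long do_hw_wait(struct my_device *dev)
{
	return 0;
}

static long my_wait_for_hw(struct my_device *dev)
{
	long ret;

	/*
	 * Pretend, for lockdep's benefit only, that reset_mutex is held
	 * across the wait. Any caller that waits while holding a lock
	 * the reset path also needs now triggers a lockdep report, long
	 * before a real wait-vs-reset deadlock can occur. With lockdep
	 * disabled these annotations compile away to nothing.
	 */
	mutex_acquire(&dev->reset_mutex.dep_map, 0, 0, _THIS_IP_);
	ret = do_hw_wait(dev);
	mutex_release(&dev->reset_mutex.dep_map, 0, _THIS_IP_);

	return ret;
}

The design point, as the comment in the patch says, is that wait and reset
never need to be serialised against each other; the annotation exists only
so that a potential dependency cycle shows up under lockdep during
development rather than as a deadlock in the field.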