Quoting Tvrtko Ursulin (2019-09-19 11:48:25)
>
> On 18/09/2019 15:54, Chris Wilson wrote:
> > As not only is the signal->timeline volatile, so will be acquiring the
> > timeline's HWSP. We must first carefully acquire the timeline from the
> > signaling request and then lock the timeline. With the removal of the
> > struct_mutex serialisation of request construction, we can have multiple
> > timelines active at once, and so we must avoid using the nested mutex
> > lock as it is quite possible for both timelines to be establishing
> > semaphores on the other and so deadlock.
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/gt/intel_timeline.c | 32 ++++++++++++++++++------
> >  1 file changed, 25 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
> > index 2ce81bdf7f86..7b1d4098dd2e 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_timeline.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
> > @@ -500,17 +500,32 @@ int intel_timeline_read_hwsp(struct i915_request *from,
> >  			     struct i915_request *to,
> >  			     u32 *hwsp)
> >  {
> > -	struct intel_timeline_cacheline *cl = from->hwsp_cacheline;
> > -	struct intel_timeline *tl = from->timeline;
>
> I think this should have been annotated in 2/4.

I left it un-annotated so that it would be left with a warning, as I
felt this change was substantial enough to merit its own patch.

> > +	struct intel_timeline *tl;
> >  	int err;
> >
> > +	rcu_read_lock();
> > +	tl = rcu_dereference(from->timeline);
> > +	if (i915_request_completed(from) || !kref_get_unless_zero(&tl->kref))
> > +		tl = NULL;
> > +	rcu_read_unlock();
> > +	if (!tl) /* already completed */
> > +		return 1;
> > +
> >  	GEM_BUG_ON(rcu_access_pointer(to->timeline) == tl);
> >
> > -	mutex_lock_nested(&tl->mutex, SINGLE_DEPTH_NESTING);
> > -	err = i915_request_completed(from);
> > -	if (!err)
> > +	err = -EBUSY;
> > +	if (mutex_trylock(&tl->mutex)) {
> > +		struct intel_timeline_cacheline *cl = from->hwsp_cacheline;
> > +
> > +		if (i915_request_completed(from)) {
> > +			err = 1;
> > +			goto unlock;
> > +		}
> > +
> >  		err = cacheline_ref(cl, to);
> > -	if (!err) {
> > +		if (err)
> > +			goto unlock;
> > +
> >  		if (likely(cl == tl->hwsp_cacheline)) {
> >  			*hwsp = tl->hwsp_offset;
> >  		} else { /* across a seqno wrap, recover the original offset */
> > @@ -518,8 +533,11 @@ int intel_timeline_read_hwsp(struct i915_request *from,
> >  				ptr_unmask_bits(cl->vaddr, CACHELINE_BITS) *
> >  				CACHELINE_BYTES;
> >  		}
> > +
> > +unlock:
> > +		mutex_unlock(&tl->mutex);
> >  	}
> > -	mutex_unlock(&tl->mutex);
> > +	intel_timeline_put(tl);
> >
> >  	return err;
> >  }
> >
>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>

Thanks,
-Chris
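
A minimal userspace sketch of the trylock-or-back-off pattern the patch adopts
in place of mutex_lock_nested(). This is an illustrative analogue only: the
pthread mutex, struct timeline and read_hwsp() names here are made up for the
example and are not the i915 code.

/*
 * Two contexts can each hold their own timeline lock while needing to read
 * the *other* timeline's HWSP for a semaphore. Taking the second lock
 * unconditionally (even annotated SINGLE_DEPTH_NESTING) can ABBA-deadlock;
 * trylock with an -EBUSY fallback cannot, as the loser simply gives up.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

struct timeline {			/* hypothetical stand-in */
	pthread_mutex_t mutex;
	unsigned int hwsp_offset;
};

/* The caller may already hold its own timeline's mutex. */
static int read_hwsp(struct timeline *signaler, unsigned int *hwsp)
{
	if (pthread_mutex_trylock(&signaler->mutex))
		return -EBUSY;		/* contended: back off, do not block */

	*hwsp = signaler->hwsp_offset;	/* stable while the lock is held */
	pthread_mutex_unlock(&signaler->mutex);
	return 0;
}

int main(void)
{
	struct timeline a = { PTHREAD_MUTEX_INITIALIZER, 64 };
	unsigned int hwsp;
	int err = read_hwsp(&a, &hwsp);

	if (err == -EBUSY)
		printf("busy, fall back to an ordinary wait\n");
	else
		printf("hwsp offset %u\n", hwsp);
	return 0;
}

Returning -EBUSY instead of blocking works because, presumably, the caller can
always fall back to an ordinary seqno wait on the signaler; giving up on a
contended lock is therefore safe, whereas two contexts each holding their own
timeline->mutex and unconditionally taking the other's could deadlock.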