Re: [PATCH] drm/i915/guc: Refcount context during error capture


 



On Mon, Sep 13, 2021 at 02:17:42PM -0700, Matthew Brost wrote:
> On Mon, Sep 13, 2021 at 02:10:16PM -0700, John.C.Harrison@xxxxxxxxx wrote:
> > From: John Harrison <John.C.Harrison@xxxxxxxxx>
> > 
> > When i915 receives a context reset notification from GuC, it triggers
> > an error capture before resetting any outstanding requests of that
> > context. Unfortunately, the error capture is not a time-bound
> > operation. In certain situations it can take a long time, particularly
> > when multiple large LMEM buffers must be read back and encoded. If
> > this delay is longer than other timeouts (heartbeat, test recovery,
> > etc.) then a full GT reset can be triggered in the middle.
> > 
> > That can result in the context that GuC is resetting actually being
> > destroyed before the error capture completes and the GuC submission
> > code resumes. The GuC side can then start dereferencing stale
> > pointers and Bad Things ensue.
> > 
> > So add a refcount get of the context during the entire reset
> > operation. That way, the context can't be destroyed part way through
> > no matter what other resets or user interactions occur.
> > 
> > Signed-off-by: John Harrison <John.C.Harrison@xxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > index c7a41802b448..7291fd8f68a6 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > @@ -2920,6 +2920,7 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> >  {
> >  	struct intel_context *ce;
> >  	int desc_idx;
> > +	unsigned long flags;
> >  
> >  	if (unlikely(len != 1)) {
> >  		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
> > @@ -2927,11 +2928,24 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> >  	}
> >  
> >  	desc_idx = msg[0];
> > +
> > +	/*
> > +	 * The context lookup uses an xarray, but lookups only require the RCU
> > +	 * read lock, not the full spinlock. So take the spinlock explicitly and
> > +	 * hold it until a reference to the context has been taken, ensuring it
> > +	 * can't be destroyed asynchronously before the reset is done.
> > +	 */
> > +	xa_lock_irqsave(&guc->context_lookup, flags);
> >  	ce = g2h_context_lookup(guc, desc_idx);
> > +	if (ce)
> > +		intel_context_get(ce);
> > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > +
> >  	if (unlikely(!ce))
> >  		return -EPROTO;
> >  
> >  	guc_handle_context_reset(guc, ce);
> > +	intel_context_put(ce);
> 
> So this is going to directly conflict with a patch that I'm about to
> post as I'm going to change the error capture to async operation. In
> that case the intel_context_put would need to be done once that op
> completes. I'll likely pull this patch into that series. I'd expect it
> to be posted by the end of the day.
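
Just to illustrate that point (this is not code from either series, and all
names below are made up): with an async capture, the reference taken at G2H
lookup time would presumably travel with the work item and only be dropped
once the capture finishes, roughly:

struct guc_capture_work {			/* hypothetical, for illustration only */
	struct work_struct work;
	struct intel_context *ce;		/* holds the reference taken at lookup */
};

static void guc_capture_work_fn(struct work_struct *w)
{
	struct guc_capture_work *cw =
		container_of(w, struct guc_capture_work, work);

	/* ... the potentially slow error capture for cw->ce runs here ... */

	intel_context_put(cw->ce);		/* drop the reference only when the capture is done */
	kfree(cw);
}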

tbh this entire thing looks very scary. Somehow we can race with other
processing while we're trying to handle a reset. That's fragile at best.

The proper fix is to exclude these kinds of problems by design, by either
guaranteeing that no concurrent dequeuing of guc2host messages can happen,
or by holding appropriate locks, or by keeping track of anything pending
in a more controlled way (something like tracking expected g2h messages as
separate structs, instead of the current spaghetti of layering violations
we have for processing g2h messages).
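
Purely to illustrate that last point, and with every name below invented
for the sketch: something along the lines of one small object per
outstanding message, so ownership of the context reference is explicit
instead of being implied by handler ordering:

struct g2h_pending_msg {			/* invented name, not existing i915 code */
	struct list_head link;			/* on a guc-level pending list, under its lock */
	u32 action;				/* which G2H reply we expect */
	struct intel_context *ce;		/* reference held until the message is processed */
};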

Maybe we should use a few of these as examples when we go through the
locking engineering training with the GuC team.
-Daniel

> 
> Matt 
> 
> >  
> >  	return 0;
> >  }
> > -- 
> > 2.25.1
> > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


