Re: [PATCH] drm/i915: Fix dbuf slice mask when turning off all the pipes

On Mon, May 18, 2020 at 09:33:29AM +0300, Ville Syrjälä wrote:
> On Sun, May 17, 2020 at 03:12:49PM +0300, Lisovskiy, Stanislav wrote:
> > On Sat, May 16, 2020 at 07:15:42PM +0300, Ville Syrjala wrote:
> > > From: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>
> > > 
> > > The current dbuf slice computation only happens when there are
> > > active pipes. If we are turning off all the pipes we just leave
> > > the dbuf slice mask at its previous value, which may be something
> > > other than BIT(S1). If runtime PM kicks in it will however
> > > turn off everything but S1. Then on the next atomic commit (if
> > > the new dbuf slice mask matches the stale value we left behind)
> > > the code will not turn on the other slices we now need. This will
> > > lead to underruns as the planes are trying to use a dbuf slice
> > > that's not powered up.
> > > 
> > > To work around this, let's just explicitly set the dbuf slice mask
> > > to BIT(S1) when we are turning off all the pipes. Really the code
> > > should just calculate this stuff the same way regardless of whether
> > > the pipes are on or off, but we're not quite there yet (need a
> > > bit more work on the dbuf state for that).
> > > 
> > > Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@xxxxxxxxx>
> > > Fixes: 3cf43cdc63fb ("drm/i915: Introduce proper dbuf state")
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>
> > > ---
> > >  drivers/gpu/drm/i915/intel_pm.c | 16 ++++++++++++++++
> > >  1 file changed, 16 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > index a21e36ed1a77..4a523d8b881f 100644
> > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > @@ -4071,6 +4071,22 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
> > >  	*num_active = hweight8(active_pipes);
> > >  
> > >  	if (!crtc_state->hw.active) {
> > > +		/*
> > > +		 * FIXME hack to make sure we compute this sensibly when
> > > +		 * turning off all the pipes. Otherwise we leave it at
> > > +		 * whatever we had previously, and then runtime PM will
> > > +		 * mess it up by turning off all but S1. Remove this
> > > +		 * once the dbuf state computation flow becomes sane.
> > > +		 */
> > > +		if (active_pipes == 0) {
> > > +			new_dbuf_state->enabled_slices = BIT(DBUF_S1);
> > > +
> > > +			if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) {
> > > +				ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
> > > +				if (ret)
> > > +					return ret;
> > > +			}
> > > +		}
> > 
> > Rather weird, why didn't we have that issue before?
> > Just trying to figure out what the reason is - aren't we recovering the last
> > state of enabled slices from hw in gen9_dbuf_enable?
> > 
> > As I understand it, you modify enabled_slices in the dbuf global object,
> > recovering the actual hw state there.
> > 
> > Also, from your patches I don't see the actual logic difference from what
> > was happening before dbuf_state in that sense.
> > I.e. we were also bailing out of skl_ddb_get_pipe_allocation_limits() before,
> > without modifying dbuf_state, yet there was no issue.
> 
> We didn't have the old state, so the pre/post update hooks were comparing
> the new value against the value that was mangled by the display core init
> to match the actual hw state.
> 
> The reason why it bit tgl so hard is that we tend to use two slices
> on tgl all the time, whereas on icl we use just the first slice most
> of the time.

Ah yep, so previously we were comparing it against the value fetched from hw
right away, and now we compare against the previous dbuf_state.
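
For reference, the failure mode boils down to something like this throwaway
user space sketch (not the i915 code, all the names here are made up):

/*
 *  1. both slices in use while the pipes are active
 *  2. all pipes turned off -> the sw mask is left stale at S1|S2
 *  3. runtime PM powers down everything but S1 in hw
 *  4. next commit wants S1|S2 again, sees old sw == new sw, skips the
 *     hw update -> a plane ends up on an unpowered slice
 */
#include <stdio.h>

#define BIT(x)	(1u << (x))
#define DBUF_S1	0
#define DBUF_S2	1

static unsigned int hw_slices;	/* which slices are actually powered */

/* stand-in for the "program the hw" step of the commit */
static void hw_slices_update(unsigned int slices)
{
	hw_slices = slices;
}

/* the commit only touches the hw when the sw mask changes */
static unsigned int commit(unsigned int old_sw, unsigned int new_sw)
{
	if (new_sw != old_sw)
		hw_slices_update(new_sw);
	return new_sw;
}

int main(void)
{
	unsigned int sw = BIT(DBUF_S1) | BIT(DBUF_S2);

	hw_slices_update(sw);			/* pipes active, S1+S2 powered */

	/* all pipes off: without the fix the sw mask stays at S1|S2 */
	/* sw = commit(sw, BIT(DBUF_S1)); */	/* <- effectively what the patch does */

	hw_slices = BIT(DBUF_S1);		/* runtime PM leaves only S1 on */

	/* pipes back on, two slices wanted again */
	sw = commit(sw, BIT(DBUF_S1) | BIT(DBUF_S2));

	printf("sw mask %#x, hw slices %#x -> %s\n", sw, hw_slices,
	       sw == hw_slices ? "ok" : "S2 not powered, underrun");
	return 0;
}

With the sw mask left stale the last commit skips the hw update and S2 stays
unpowered; uncommenting the marked line makes the final commit power S2 back up.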

However, I agree that of course we should modify the new dbuf state properly
when active_pipes == 0; the only thing I would vote for is doing all the
enabled_slices assignments in the same place, using that table magic func.
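
Something roughly along these lines is what I mean (just a pseudo-kernel
sketch, compute_dbuf_slices() and dbuf_slices_from_table() are made-up names
standing in for the table-based helper):

/*
 * Sketch only: one helper that always decides the slice mask, including
 * the "no active pipes" case, instead of special-casing it in
 * skl_ddb_get_pipe_allocation_limits().
 */
static u8 compute_dbuf_slices(struct drm_i915_private *dev_priv,
			      u8 active_pipes)
{
	/* no pipes -> only the always-on S1 slice */
	if (active_pipes == 0)
		return BIT(DBUF_S1);

	/* otherwise look the mask up in the per-platform table */
	return dbuf_slices_from_table(dev_priv, active_pipes);
}

That way the active_pipes == 0 case wouldn't need to be special-cased at the
call site at all.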

Stan
> 
> -- 
> Ville Syrjälä
> Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



