Re: [PATCH v8 1/2] drm/i915: Refactor intel_can_enable_sagv

On Fri, 2019-10-25 at 10:44 +0000, Lisovskiy, Stanislav wrote:
> On Fri, 2019-10-25 at 13:24 +0300, Ville Syrjälä wrote:
> > On Fri, Oct 25, 2019 at 12:53:51PM +0300, Stanislav Lisovskiy wrote:
> > > Currently the intel_can_enable_sagv function contains
> > > a mix of workarounds for different platforms, some of
> > > which are no longer valid for gens >= 11, so let's
> > > split it into separate functions.
> > > 
> > > v2:
> > >     - Rework the watermark calculation algorithm to
> > >       attempt to calculate the Level 0 watermark
> > >       with the SAGV block time latency added and
> > >       check if it fits in the DBuf, in order to
> > >       determine already at this stage whether SAGV
> > >       can be enabled, just as BSpec 49325 states.
> > >       If that fails, fall back to the usual Level 0
> > >       latency and disable SAGV.
> > >     - Remove unneeded tabs (James Ausmus)
> > > 
> > > v3: Rebased the patch
> > > 
> > > v4: - Added back interlaced check for Gen12 and
> > >       added separate function for TGL SAGV check
> > >       (thanks to James Ausmus for spotting)
> > >     - Removed unneeded gen check
> > >     - Extracted Gen12 SAGV decision making code
> > >       to a separate function from skl_compute_wm
> > > 
> > > Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@xxxxxxxxx>
> > > Cc: Ville Syrjälä <ville.syrjala@xxxxxxxxx>
> > > Cc: James Ausmus <james.ausmus@xxxxxxxxx>
> > > ---
> > >  .../drm/i915/display/intel_display_types.h    |   8 +
> > >  drivers/gpu/drm/i915/intel_pm.c               | 254 +++++++++++++++++-
> > >  2 files changed, 254 insertions(+), 8 deletions(-)
> > > 
> > > 
> > 
> > > 
> > >  	
> > > +	/*
> > > +	 * Let's assume we can tolerate SAGV for now, until the
> > > +	 * watermark calculations prove the opposite: if any of the
> > > +	 * pipe planes in the state fail the requirements, this will
> > > +	 * be set to false in skl_compute_ddb.
> > > +	 */
> > > +	state->gen12_can_sagv = true;
> > > +
> > > +	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
> > > +					    new_crtc_state, i) {
> > > +		ret = tgl_check_pipe_fits_sagv_wm(new_crtc_state, ddb);
> > > +		if (ret) {
> > > +			state->gen12_can_sagv = false;
> > > +			break;
> > > +		}
> > 
> > This is not going to work. We need the information from _all_
> > pipes, not just the ones in the state. We probably want to make
> > that can_sagv thing a bitmask of pipes so that we don't have to
> > have all pipes in the state.
> 
> But isn't it so that even if just one plane/pipe can't tolerate
> SAGV, we can't enable it at all? So what is the point of checking
> the other planes/pipes then? Or maybe I'm missing something here.

Ok, I think I get your point actually. Since we don't have all pipes in
the state, we might wrongly come to the conclusion that we can enable
SAGV here. It also probably means that I will have to move
gen12_can_sagv from intel_atomic_state to our favourite dev_priv
struct, as we will have to track all of the pipes in the global state.
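To make that concrete, here is a minimal userspace sketch of the bitmask
idea (all the names here, like sagv_wm_fail_pipes, are hypothetical and
only illustrate the approach; this is not the actual i915 code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum pipe { PIPE_A, PIPE_B, PIPE_C, PIPE_D, I915_MAX_PIPES };

/* Stand-in for the relevant part of dev_priv (hypothetical). */
struct fake_dev_priv {
	uint8_t sagv_wm_fail_pipes;	/* bit set => pipe cannot tolerate SAGV */
	uint8_t active_pipes;		/* bit set => pipe is active */
};

/* Record the per-pipe result computed during the watermark checks. */
static void set_pipe_sagv_ok(struct fake_dev_priv *i915, enum pipe pipe,
			     bool ok)
{
	if (ok)
		i915->sagv_wm_fail_pipes &= (uint8_t)~(1u << pipe);
	else
		i915->sagv_wm_fail_pipes |= (uint8_t)(1u << pipe);
}

/*
 * SAGV can be enabled only if no currently active pipe failed the
 * check; pipes not touched by the last commit keep their stored bit.
 */
static bool can_enable_sagv(const struct fake_dev_priv *i915)
{
	return !(i915->sagv_wm_fail_pipes & i915->active_pipes);
}
```

The point of the mask is exactly the concern above: a pipe that was not
part of the current state keeps its previously computed bit, so the
final decision still covers all pipes.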

Regarding the 3 different code paths in can_enable_sagv: the problem is
that, as I understand it, those really are different. For example, we
disable SAGV for SKL if there are multiple active pipes, while for ICL
we don't.

Also, as we discussed, ICL and TGL have completely different ways of
treating sagv_block_time (at least according to the current BSpec). I
could unite all of those functions into one, but it would then contain
multiple platform checks and so on.
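Just to illustrate what the united version would look like, a rough
sketch (the helper names and the exact per-platform conditions below are
simplified stand-ins based on the points above, not the real driver
checks):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the bits of state the decision needs. */
struct sagv_state {
	int gen;		/* display generation */
	int num_active_pipes;
	bool interlaced;	/* any interlaced mode active */
};

static bool skl_can_enable_sagv(const struct sagv_state *s)
{
	/* SKL: SAGV is disabled when more than one pipe is active. */
	return s->num_active_pipes <= 1;
}

static bool icl_can_enable_sagv(const struct sagv_state *s)
{
	(void)s;	/* ICL tolerates multiple active pipes here */
	return true;
}

static bool tgl_can_enable_sagv(const struct sagv_state *s)
{
	/* TGL: interlaced modes rule out SAGV (the v4 check). */
	return !s->interlaced;
}

/* The combined version ends up as a pile of platform checks. */
static bool intel_can_enable_sagv(const struct sagv_state *s)
{
	if (s->gen >= 12)
		return tgl_can_enable_sagv(s);
	else if (s->gen >= 11)
		return icl_can_enable_sagv(s);
	return skl_can_enable_sagv(s);
}
```

Splitting per platform keeps each function readable instead of threading
those gen checks through one body.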

> 
> > 
> > > +	}
> > > +
> > > +	if (state->gen12_can_sagv) {
> > > +		/*
> > > +		 * If we determined that we can actually enable SAGV,
> > > +		 * then use those levels: tgl_check_pipe_fits_sagv_wm
> > > +		 * has already taken care of checking that L0 + sagv
> > > +		 * block time fits into the ddb.
> > > +		 */
> > > +		for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
> > > +						    new_crtc_state, i) {
> > > +			struct intel_plane *plane;
> > > +
> > > +			for_each_intel_plane_on_crtc(&dev_priv->drm, crtc, plane) {
> > > +				enum plane_id plane_id = plane->id;
> > > +				struct skl_plane_wm *plane_wm =
> > > +					&new_crtc_state->wm.skl.optimal.planes[plane_id];
> > > +				struct skl_wm_level *sagv_wm0 = &plane_wm->sagv_wm_l0;
> > > +				struct skl_wm_level *l0_wm0 = &plane_wm->wm[0];
> > > +
> > > +				memcpy(l0_wm0, sagv_wm0, sizeof(struct skl_wm_level));
> > > +			}
> > > +		}
> > > +	}
> > > +}
> > > +
> > >  static int
> > >  skl_compute_wm(struct intel_atomic_state *state)
> > >  {
> > > +	struct drm_device *dev = state->base.dev;
> > > +	const struct drm_i915_private *dev_priv = to_i915(dev);
> > >  	struct intel_crtc *crtc;
> > >  	struct intel_crtc_state *new_crtc_state;
> > >  	struct intel_crtc_state *old_crtc_state;
> > > @@ -5553,6 +5785,9 @@ skl_compute_wm(struct intel_atomic_state *state)
> > >  	/* Clear all dirty flags */
> > >  	results->dirty_pipes = 0;
> > >  
> > > +	/* If we exit before check is done */
> > > +	state->gen12_can_sagv = false;
> > > +
> > >  	ret = skl_ddb_add_affected_pipes(state);
> > >  	if (ret)
> > >  		return ret;
> > > @@ -5579,6 +5814,9 @@ skl_compute_wm(struct intel_atomic_state *state)
> > >  			results->dirty_pipes |= BIT(crtc->pipe);
> > >  	}
> > >  
> > > +	if (INTEL_GEN(dev_priv) >= 12)
> > > +		tgl_check_and_set_sagv(state);
> > > +
> > >  	ret = skl_compute_ddb(state);
> > >  	if (ret)
> > >  		return ret;
> > > -- 
> > > 2.17.1
> > 
> > 
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx