Re: [PATCH 19/19] drm/i915: power domains: add vlv power wells

On Wed, 26 Feb 2014 20:02:19 +0200
Imre Deak <imre.deak@xxxxxxxxx> wrote:

> On Thu, 2014-02-20 at 11:58 -0800, Jesse Barnes wrote:
> > On Wed, 19 Feb 2014 14:29:44 +0200
> > Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx> wrote:
> > 
> > > On Tue, Feb 18, 2014 at 12:02:20AM +0200, Imre Deak wrote:
> > > > Based on an early draft from Jesse.
> > > > 
> > > > Add support for powering on/off the dynamic power wells on VLV by
> > > > registering its display and dpio dynamic power wells with the power
> > > > domain framework.
> > > > 
> > > > For now, power on all PHY TX lanes regardless of the actual lane
> > > > configuration. This can be optimized later, once the PHY side setup
> > > > enables only the required lanes; at the moment it enables all lanes
> > > > in all cases.
> > > > 
> > > > Signed-off-by: Imre Deak <imre.deak@xxxxxxxxx>
> > > > ---
> > > >  drivers/gpu/drm/i915/i915_dma.c      |   1 -
> > > >  drivers/gpu/drm/i915/i915_drv.h      |   2 +-
> > > >  drivers/gpu/drm/i915/intel_display.c |   1 +
> > > >  drivers/gpu/drm/i915/intel_pm.c      | 228 +++++++++++++++++++++++++++++++++++
> > > >  4 files changed, 230 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> > > > index dca4dc3..f8f7a59 100644
> > > > --- a/drivers/gpu/drm/i915/i915_dma.c
> > > > +++ b/drivers/gpu/drm/i915/i915_dma.c
> > > > @@ -1668,7 +1668,6 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
> > > >  		goto out_mtrrfree;
> > > >  	}
> > > >  
> > > > -	dev_priv->display_irqs_enabled = true;
> > > >  	intel_irq_init(dev);
> > > >  	intel_uncore_sanitize(dev);
> > > >  
> > > > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > > > index 227c349..804334e 100644
> > > > --- a/drivers/gpu/drm/i915/i915_drv.h
> > > > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > > > @@ -1053,7 +1053,7 @@ struct i915_power_well {
> > > >  	/* power well enable/disable usage count */
> > > >  	int count;
> > > >  	unsigned long domains;
> > > > -	void *data;
> > > > +	unsigned long data;
> > > >  	const struct i915_power_well_ops *ops;
> > > >  };
> > > >  
> > > > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> > > > index ea00878..d6661c4 100644
> > > > --- a/drivers/gpu/drm/i915/intel_display.c
> > > > +++ b/drivers/gpu/drm/i915/intel_display.c
> > > > @@ -4224,6 +4224,7 @@ static void valleyview_modeset_global_resources(struct drm_device *dev)
> > > >  
> > > >  	if (req_cdclk != cur_cdclk)
> > > >  		valleyview_set_cdclk(dev, req_cdclk);
> > > > +	modeset_update_power_wells(dev);
> > > >  }
> > > >  
> > > >  static void valleyview_crtc_enable(struct drm_crtc *crtc)
> > > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > > index 68f58e5..e4416a7 100644
> > > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > > @@ -5344,6 +5344,133 @@ static void hsw_power_well_disable(struct drm_i915_private *dev_priv,
> > > >  	hsw_enable_package_c8(dev_priv);
> > > >  }
> > > >  
> > > > +static void vlv_set_power_well(struct drm_i915_private *dev_priv,
> > > > +			       struct i915_power_well *power_well, bool enable)
> > > > +{
> > > > +	enum punit_power_well power_well_id = power_well->data;
> > > > +	u32 mask;
> > > > +	u32 state;
> > > > +	u32 ctrl;
> > > > +
> > > > +	mask = PUNIT_PWRGT_MASK(power_well_id);
> > > > +	state = enable ? PUNIT_PWRGT_PWR_ON(power_well_id) :
> > > > +			 PUNIT_PWRGT_PWR_GATE(power_well_id);
> > > > +
> > > > +	mutex_lock(&dev_priv->rps.hw_lock);
> > > > +
> > > > +#define COND \
> > > > +	((vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_STATUS) & mask) == state)
> > > > +
> > > > +	if (COND)
> > > > +		goto out;
> > > > +
> > > > +	ctrl = vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_CTRL);
> > > > +	ctrl &= ~mask;
> > > > +	ctrl |= state;
> > > > +	vlv_punit_write(dev_priv, PUNIT_REG_PWRGT_CTRL, ctrl);
> > > > +
> > > > +	if (wait_for(COND, 100))
> > > > +		DRM_ERROR("timeout setting power well state %08x (%08x)\n",
> > > > +			  state,
> > > > +			  vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_CTRL));
> > > 
> > > #undef COND somewhere to avoid surprises further down in the code?
> > > 
> > > > +
> > > > +out:
> > > > +	mutex_unlock(&dev_priv->rps.hw_lock);
> > > > +}
> > > > +
> > > <snip>
> > > 
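For reference, a minimal sketch of the #undef suggestion above, reusing the
code from the quoted hunk (only comments added); the point is that COND is
undefined as soon as the punit handshake is done, so the macro can't leak
into unrelated code further down in intel_pm.c:

#define COND \
	((vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_STATUS) & mask) == state)

	/* Nothing to do if the punit already reports the requested state. */
	if (COND)
		goto out;

	ctrl = vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_CTRL);
	ctrl &= ~mask;
	ctrl |= state;
	vlv_punit_write(dev_priv, PUNIT_REG_PWRGT_CTRL, ctrl);

	/* Wait for the punit to ack the requested power state change. */
	if (wait_for(COND, 100))
		DRM_ERROR("timeout setting power well state %08x (%08x)\n",
			  state,
			  vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_CTRL));

#undef COND

out:
	mutex_unlock(&dev_priv->rps.hw_lock);
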
> > 
> > I'd like to see the code for re-enabling the display state land
> > eventually too, so we can get power savings when userspace uses DPMS
> > instead of NULL mode sets.  But doing that nicely requires some more
> > work in the mode set code to pull out more PLL bits (also needed for
> > atomic mode setting).
> 
> I guess you mean the drm connector->funcs->dpms() callback here, which is
> called when setting the connector's dpms property. But I'm not sure what
> you meant by re-enabling the display state.
> 
> I was thinking that we would need to get/put the power domains around
> setting DPMS off/on via the above property, but we actually don't need
> that. Internally the dpms callback just calls
> display.disable_crtc/enable_crtc(), which will disable/enable the pipes
> but won't do anything with the power domains (those are only updated
> during a normal modeset, through display.modeset_global_resources()).
> 
> This isn't optimal power-wise, but it's a separate issue. I think Daniel
> had the idea of converting the dpms callback into a proper NULL modeset.
> In that case the power domains would be get/put correctly as well.

Right, that's what I'm getting at.  Today when someone uses the DPMS
prop, we won't go through the mode set path and thus won't do any power
well toggles.  I think that's a bug we should fix, either by making DPMS
into a NULL mode set or by pulling more bits out of our mode set sequence
into our crtc enable/disable paths.

My earlier code did this somewhat differently, but it did cover DPMS.
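
As a rough illustration of the second option (pulling the power domain
handling into the crtc enable/disable paths), the sketch below shows the
general shape.  Note that example_crtc_power_domains(), example_power_get()
and example_power_put() are hypothetical placeholders, not the current i915
functions; the point is only that a DPMS off/on cycle would then take and
drop the well references without going through a full mode set:

/* Sketch only: hypothetical helpers, not the actual i915 API. */
static void example_crtc_enable(struct drm_crtc *crtc)
{
	struct drm_i915_private *dev_priv = crtc->dev->dev_private;
	unsigned long domains = example_crtc_power_domains(crtc);
	unsigned long domain;

	/* Take the wells this pipe and its ports need before touching hw. */
	for_each_set_bit(domain, &domains, BITS_PER_LONG)
		example_power_get(dev_priv, domain);

	/* ... PLL, pipe and encoder enable sequence ... */
}

static void example_crtc_disable(struct drm_crtc *crtc)
{
	struct drm_i915_private *dev_priv = crtc->dev->dev_private;
	unsigned long domains = example_crtc_power_domains(crtc);
	unsigned long domain;

	/* ... encoder, pipe and PLL disable sequence ... */

	/* Drop the references so the wells can actually power down. */
	for_each_set_bit(domain, &domains, BITS_PER_LONG)
		example_power_put(dev_priv, domain);
}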

-- 
Jesse Barnes, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx