On Wed, 19 Feb 2014 14:29:44 +0200
Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx> wrote:

> On Tue, Feb 18, 2014 at 12:02:20AM +0200, Imre Deak wrote:
> > Based on an early draft from Jesse.
> >
> > Add support for powering on/off the dynamic power wells on VLV by
> > registering its display and dpio dynamic power wells with the power
> > domain framework.
> >
> > For now power on all PHY TX lanes regardless of the actual lane
> > configuration. Later this can be optimized when the PHY side setup
> > enables only the required lanes. Atm, it enables all lanes in all
> > cases.
> >
> > Signed-off-by: Imre Deak <imre.deak@xxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/i915_dma.c      |   1 -
> >  drivers/gpu/drm/i915/i915_drv.h      |   2 +-
> >  drivers/gpu/drm/i915/intel_display.c |   1 +
> >  drivers/gpu/drm/i915/intel_pm.c      | 228 +++++++++++++++++++++++++++++++++++
> >  4 files changed, 230 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> > index dca4dc3..f8f7a59 100644
> > --- a/drivers/gpu/drm/i915/i915_dma.c
> > +++ b/drivers/gpu/drm/i915/i915_dma.c
> > @@ -1668,7 +1668,6 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
> >  		goto out_mtrrfree;
> >  	}
> >
> > -	dev_priv->display_irqs_enabled = true;
> >  	intel_irq_init(dev);
> >  	intel_uncore_sanitize(dev);
> >
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 227c349..804334e 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -1053,7 +1053,7 @@ struct i915_power_well {
> >  	/* power well enable/disable usage count */
> >  	int count;
> >  	unsigned long domains;
> > -	void *data;
> > +	unsigned long data;
> >  	const struct i915_power_well_ops *ops;
> >  };
> >
> > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> > index ea00878..d6661c4 100644
> > --- a/drivers/gpu/drm/i915/intel_display.c
> > +++ b/drivers/gpu/drm/i915/intel_display.c
> > @@ -4224,6 +4224,7 @@ static void valleyview_modeset_global_resources(struct drm_device *dev)
> >
> >  	if (req_cdclk != cur_cdclk)
> >  		valleyview_set_cdclk(dev, req_cdclk);
> > +	modeset_update_power_wells(dev);
> >  }
> >
> >  static void valleyview_crtc_enable(struct drm_crtc *crtc)
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > index 68f58e5..e4416a7 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -5344,6 +5344,133 @@ static void hsw_power_well_disable(struct drm_i915_private *dev_priv,
> >  	hsw_enable_package_c8(dev_priv);
> >  }
> >
> > +static void vlv_set_power_well(struct drm_i915_private *dev_priv,
> > +			       struct i915_power_well *power_well, bool enable)
> > +{
> > +	enum punit_power_well power_well_id = power_well->data;
> > +	u32 mask;
> > +	u32 state;
> > +	u32 ctrl;
> > +
> > +	mask = PUNIT_PWRGT_MASK(power_well_id);
> > +	state = enable ? PUNIT_PWRGT_PWR_ON(power_well_id) :
> > +			 PUNIT_PWRGT_PWR_GATE(power_well_id);
> > +
> > +	mutex_lock(&dev_priv->rps.hw_lock);
> > +
> > +#define COND \
> > +	((vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_STATUS) & mask) == state)
> > +
> > +	if (COND)
> > +		goto out;
> > +
> > +	ctrl = vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_CTRL);
> > +	ctrl &= ~mask;
> > +	ctrl |= state;
> > +	vlv_punit_write(dev_priv, PUNIT_REG_PWRGT_CTRL, ctrl);
> > +
> > +	if (wait_for(COND, 100))
> > +		DRM_ERROR("timout setting power well state %08x (%08x)\n",
> > +			  state,
> > +			  vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_CTRL));
>
> #undef COND somewhere to avoid surprises further down in the code?
>
> > +
> > +out:
> > +	mutex_unlock(&dev_priv->rps.hw_lock);
> > +}
> > +
>
<snip>
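For illustration, a minimal sketch of how the suggested #undef could sit at
the end of vlv_set_power_well() (placement example only, not part of the
posted patch):

	if (wait_for(COND, 100))
		DRM_ERROR("timeout setting power well state %08x (%08x)\n",
			  state,
			  vlv_punit_read(dev_priv, PUNIT_REG_PWRGT_CTRL));

#undef COND	/* keep the helper macro scoped to this function */

out:
	mutex_unlock(&dev_priv->rps.hw_lock);
}
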
I'd like to see the code for re-enabling the display state land eventually
too, so we can get savings when userspace uses DPMS instead of NULL mode sets
for things.  But to do that nicely requires some more work in the mode set
code to pull out more PLL bits (also needed for atomic mode setting).

-- 
Jesse Barnes, Intel Open Source Technology Center
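As background on the ops/data pairing the patch builds on (note the
void *data -> unsigned long data change in i915_drv.h above), here is a
minimal, self-contained sketch of the usual refcounted enable/disable pattern
such a power well framework relies on. The struct layouts are trimmed and the
helper names (power_well_get/power_well_put) are made up for illustration;
they are not the actual i915 definitions:

/*
 * Illustrative only: a trimmed power well descriptor plus the get/put
 * pattern that drives its enable/disable ops.  The "data" field shows
 * why an unsigned long is handy here: on VLV it can carry the punit
 * power well id directly instead of a pointer.
 */
struct drm_i915_private;
struct i915_power_well;

struct i915_power_well_ops {
	void (*enable)(struct drm_i915_private *dev_priv,
		       struct i915_power_well *power_well);
	void (*disable)(struct drm_i915_private *dev_priv,
			struct i915_power_well *power_well);
};

struct i915_power_well {
	int count;		/* enable/disable usage count */
	unsigned long domains;	/* power domains served by this well */
	unsigned long data;	/* e.g. punit power well id on VLV */
	const struct i915_power_well_ops *ops;
};

/* Hypothetical helper: the first user powers the well on. */
static void power_well_get(struct drm_i915_private *dev_priv,
			   struct i915_power_well *power_well)
{
	if (power_well->count++ == 0)
		power_well->ops->enable(dev_priv, power_well);
}

/* Hypothetical helper: the last user powers it back off. */
static void power_well_put(struct drm_i915_private *dev_priv,
			   struct i915_power_well *power_well)
{
	if (--power_well->count == 0)
		power_well->ops->disable(dev_priv, power_well);
}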