Re: [PATCH] drm/i915: Improve PSR activation timing

On Thu, 2018-02-08 at 17:01 -0800, Andy Lutomirski wrote:
> 
> 
> > On Feb 8, 2018, at 4:39 PM, Pandiyan, Dhinakaran <dhinakaran.pandiyan@xxxxxxxxx> wrote:
> > 
> > 
> >> On Thu, 2018-02-08 at 14:48 -0800, Rodrigo Vivi wrote:
> >> Hi Andy,
> >> 
> >> thanks for getting involved with PSR and sorry for not replying sooner.
> >> 
> >> I first saw this patch on that bugzilla entry, but only now have I
> >> stopped to really think about why I wrote the code that way.
> >> 
> >> So some clarity below.
> >> 
> >>> On Mon, Feb 05, 2018 at 10:07:09PM +0000, Andy Lutomirski wrote:
> >>> The current PSR code has two call sites that each schedule delayed
> >>> work to activate PSR.  As far as I can tell, each call site intends
> >>> to keep PSR inactive for the given amount of time and then allow it
> >>> to be activated.
> >>> 
> >>> The call sites are:
> >>> 
> >>> - intel_psr_enable(), which explicitly states in a comment that
> >>>   it's trying to keep PSR off a short time after the display is
> >>>   initialized as a workaround.
> >> 
> >> First of all, I really want to kill this call here and remove the
> >> FIXME. It was an ugly hack that I added to solve a corner case
> >> that was leaving me with blank screens when activating too soon.
> >> 
> >>> 
> >>> - intel_psr_flush().  There isn't an explicit explanation, but the
> >>>   intent is presumably to keep PSR off until the display has been
> >>>   idle for 100ms.
> >> 
> >> The reason for 100 is a kind of ugly, empirical value that I
> >> concluded from VLV/CHV experience.
> >> On platforms with HW tracking, the HW waits a few identical frames
> >> until really activating PSR, while on VLV/CHV activation is immediate.
> >> But the HW is also different there, and it seemed to need a bit
> >> more time before starting the transitions.
> >> Furthermore, I didn't want to activate that quickly because I didn't
> >> want to take the risk of killing the battery by doing transitions
> >> so frequently with software tracking.
> >> 
> >>> 
> >>> The current code doesn't actually accomplish either of these goals.
> >>> Rather than keeping PSR inactive for the given amount of time, it
> >>> will schedule PSR for activation after the given time, with the
> >>> earliest target time in such a request winning.
> >> 
> >> Put that way, I was asking myself how that hack had ever fixed
> >> my issue, because the way you explain it here, it seems obvious
> >> that it would never fix my bug or any other.
> >> 
> >> So I applied your patch and it made even more sense (setting aside
> >> the fact that I want to kill the first call anyway).
> >> 
> >> So I came back, removed your patch, and tried to understand how
> >> it ever worked.
> >> 
> >> So, the thing is that intel_psr_flush will never really be executed
> >> if intel_psr_enable wasn't executed. That is guaranteed by:
> >> 
> >> mutex_lock(&dev_priv->psr.lock);
> >>    if (!dev_priv->psr.enabled) {
> >> 
> >> So, intel_psr_enable is guaranteed to be the first one to schedule
> >> the delayed work, with the ugly, larger delay.
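
Just to make the ordering guarantee above concrete, the pattern is roughly
the following.  This is only a sketch with made-up psr_sketch names, not the
real intel_psr_enable()/intel_psr_flush(); assume the struct is initialised
elsewhere with mutex_init() and INIT_DELAYED_WORK():

#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct psr_sketch {
	struct mutex lock;
	bool enabled;
	unsigned int busy_frontbuffer_bits;	/* set only by invalidate */
	struct delayed_work work;
};

static void psr_sketch_enable(struct psr_sketch *psr, unsigned int delay_ms)
{
	mutex_lock(&psr->lock);
	psr->enabled = true;
	/* Always the first arming of the work, with the larger delay. */
	schedule_delayed_work(&psr->work, msecs_to_jiffies(delay_ms));
	mutex_unlock(&psr->lock);
}

static void psr_sketch_flush(struct psr_sketch *psr)
{
	mutex_lock(&psr->lock);
	if (!psr->enabled) {
		/* Bails out here until enable() has run. */
		mutex_unlock(&psr->lock);
		return;
	}
	if (!work_busy(&psr->work.work))
		schedule_delayed_work(&psr->work, msecs_to_jiffies(100));
	mutex_unlock(&psr->lock);
}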
> >> 
> >>> 
> >>> In other words, if intel_psr_enable() is immediately followed by
> >>> intel_psr_flush(), then PSR will be activated after 100ms even if
> >>> intel_psr_enable() wanted a longer delay.  And, if the screen is
> >>> being constantly updated so that intel_psr_flush() is called once
> >>> per frame at 60Hz, PSR will still be activated once every 100ms.
> >> 
> >> During this time you are right that intel_psr_exit can be called
> >> many times from the flush functions... But none of them will
> >> schedule the work with the 100ms delay.
> >> 
> >> They will skip on
> >> if (!work_busy(&dev_priv->psr.work.work))
> 
> As below, the first call will.  Then, 100ms later, the work will fire.  Then the next flush will schedule it again, etc.
> 
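For reference, the underlying workqueue semantics can be demonstrated with a
tiny self-contained module (a sketch with made-up demo_* names, not the
actual patch): schedule_delayed_work() is a no-op while the work is still
pending, so the earliest expiry wins, whereas mod_delayed_work() re-arms the
timer and would give the "keep off until idle for N ms" behaviour these call
sites seem to want:

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static unsigned long demo_t0;
static struct delayed_work demo_work;

static void demo_fn(struct work_struct *work)
{
	pr_info("demo: fired %u ms after load\n",
		jiffies_to_msecs(jiffies - demo_t0));
}

static int __init demo_init(void)
{
	demo_t0 = jiffies;
	INIT_DELAYED_WORK(&demo_work, demo_fn);

	/* Arm the work to fire in 100 ms. */
	schedule_delayed_work(&demo_work, msecs_to_jiffies(100));

	/*
	 * The work is still pending, so this second call is a no-op and
	 * the work still fires at ~100 ms -- the earliest request wins.
	 */
	schedule_delayed_work(&demo_work, msecs_to_jiffies(500));

	/*
	 * mod_delayed_work() would instead push the expiry out to ~500 ms
	 * from now:
	 *	mod_delayed_work(system_wq, &demo_work, msecs_to_jiffies(500));
	 */
	return 0;
}

static void __exit demo_exit(void)
{
	cancel_delayed_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
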
> > 
> > Wouldn't work_busy() return false until the work is actually queued,
> > which is 100ms after calling schedule_delayed_work()?
> > 
> > For example, flushes at 0, 16, 32...96 ms will have work_busy()
> > returning false until 100ms.
> > 
> > The first psr_work will end up getting scheduled at 100ms, which I
> > believe is not what we want. 
> 
> Indeed.  I stuck some printks in and this seems to be what happens.
> 
> > 
> > 
> > However, I think 
> > 
> >    if (dev_priv->psr.busy_frontbuffer_bits)
> >        goto unlock;
> > 
> >    intel_psr_activate(intel_dp);
> > 
> > in psr_work might prevent activate from being called at 100ms if an
> > invalidate happened to be called before that.
> > 
> 
> On my system, invalidate is never called.
I noticed the same in console mode with fbdev; there are no invalidates.

>   Even if it were called, that check would only help if we got lucky and the work fired after invalidate and before the subsequent flush.
> 
Agreed. This dependency on invalidate sure looks wrong. 
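
To make the dependency concrete, continuing the made-up psr_sketch struct
from the earlier sketch (again, only the shape of the gate, not the real
intel_psr_work()):

static void psr_sketch_work(struct work_struct *work)
{
	struct psr_sketch *psr =
		container_of(to_delayed_work(work), struct psr_sketch, work);

	mutex_lock(&psr->lock);
	if (!psr->enabled)
		goto unlock;

	/*
	 * Only invalidate sets these bits, so with no invalidates this
	 * check never defers anything.
	 */
	if (psr->busy_frontbuffer_bits)
		goto unlock;

	/* psr_sketch_activate(psr); -- activation would happen here. */
unlock:
	mutex_unlock(&psr->lock);
}

With an fbdev console that never invalidates, busy_frontbuffer_bits stays
zero, so nothing stops the work from activating PSR even though flushes keep
arriving every frame.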

> > 
> > 
> > 
> >> 
> >> So, the *hack* with the larger delay will be respected and PSR won't
> >> get activated before that.
> >> 
> >> On the other hand, you might ask what if,
> >> for some strange reason,
> >> (intel_dp->panel_power_cycle_delay * 5) is less than 100.
> >> Well, in that case it would be OK, because it happens only
> >> once, and only on gen > 9, where HW tracking waits the minimal
> >> number of frames before the actual transition to PSR.
> >> 
> >> In either case I believe we are safe.
> >> 
> >> Thanks,
> >> Rodrigo.
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



