Re: [PATCH v2 13/14] [DO NOT MERGE] arm64: dts: allwinner: h6: Add GPU OPP table

Hi Clément,

On Mon, Aug 03, 2020 at 09:54:05AM +0200, Clément Péron wrote:
> Hi Maxime and All,
> 
> On Sat, 4 Jul 2020 at 16:56, Clément Péron <peron.clem@xxxxxxxxx> wrote:
> >
> > Hi Maxime,
> >
> > On Sat, 4 Jul 2020 at 14:13, Maxime Ripard <maxime@xxxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > On Sat, Jul 04, 2020 at 12:25:34PM +0200, Clément Péron wrote:
> > > > Add an Operating Performance Points table for the GPU to
> > > > enable Dynamic Voltage & Frequency Scaling on the H6.
> > > >
> > > > The voltage range is set with the minimal voltage set to the target
> > > > and the maximal voltage set to 1.2V. This allows the DVFS framework to
> > > > work properly on boards with a fixed regulator.
> > > >
> > > > Signed-off-by: Clément Péron <peron.clem@xxxxxxxxx>
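
For context on the fixed-regulator case: the OPP core requests a voltage
window rather than an exact value, so a supply whose output cannot be changed
but already sits inside [target, 1.2V] still satisfies the request. Below is a
minimal sketch of that idea using plain consumer-API calls, not the actual
Panfrost/OPP-core code; the 1.2V ceiling is taken from the patch description,
the `supply` handle is just an illustrative parameter, and the up/down
ordering of clock vs. voltage changes is glossed over:

    #include <linux/clk.h>
    #include <linux/err.h>
    #include <linux/pm_opp.h>
    #include <linux/regulator/consumer.h>

    #define GPU_VOLT_MAX_UV 1200000 /* 1.2V ceiling from the OPP table */

    static int gpu_set_rate(struct device *dev, struct clk *clk,
                            struct regulator *supply, unsigned long rate)
    {
        struct dev_pm_opp *opp;
        unsigned long target_uv;
        int ret;

        /* Pick the OPP for the requested rate and read its target voltage. */
        opp = dev_pm_opp_find_freq_ceil(dev, &rate);
        if (IS_ERR(opp))
            return PTR_ERR(opp);
        target_uv = dev_pm_opp_get_voltage(opp);
        dev_pm_opp_put(opp);

        /*
         * Ask for anything in [target, max]: a programmable regulator lands
         * on the target, while a fixed regulator whose output is already
         * inside the window simply returns success, so DVFS keeps working
         * on such boards.
         */
        ret = regulator_set_voltage(supply, target_uv, GPU_VOLT_MAX_UV);
        if (ret)
            return ret;

        return clk_set_rate(clk, rate);
    }
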
> > >
> > > That patch seems reasonable, why shouldn't we merge it?
> >
> > I didn't test it a lot, and the last time I did, some frequencies looked unstable.
> > https://lore.kernel.org/patchwork/cover/1239739/
> >
> > This series adds regulator support to Panfrost devfreq; I will send a
> > new one once DVFS on the H6 GPU is stable.
> >
> > I got this while running glmark2 last time:
> > # glmark2-es2-drm
> > =======================================================
> >     glmark2 2017.07
> > =======================================================
> >     OpenGL Information
> >     GL_VENDOR:     Panfrost
> >     GL_RENDERER:   Mali T720 (Panfrost)
> >     GL_VERSION:    OpenGL ES 2.0 Mesa 20.0.5
> > =======================================================
> >
> > [   93.550063] panfrost 1800000.gpu: GPU Fault 0x00000088 (UNKNOWN) at
> > 0x0000000080117100
> > [   94.045401] panfrost 1800000.gpu: gpu sched timeout, js=0,
> > config=0x3700, status=0x8, head=0x21d6c00, tail=0x21d6c00,
> > sched_job=00000000e3c2132f
> >
> > [  328.871070] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> > 0x0000000000000000
> > [  328.871070] Reason: TODO
> > [  328.871070] raw fault status: 0xAA0003C2
> > [  328.871070] decoded fault status: SLAVE FAULT
> > [  328.871070] exception type 0xC2: TRANSLATION_FAULT_LEVEL2
> > [  328.871070] access type 0x3: WRITE
> > [  328.871070] source id 0xAA00
> > [  329.373327] panfrost 1800000.gpu: gpu sched timeout, js=1,
> > config=0x3700, status=0x8, head=0xa1a4900, tail=0xa1a4900,
> > sched_job=000000007ac31097
> > [  329.386527] panfrost 1800000.gpu: js fault, js=0,
> > status=DATA_INVALID_FAULT, head=0xa1a4c00, tail=0xa1a4c00
> > [  329.396293] panfrost 1800000.gpu: gpu sched timeout, js=0,
> > config=0x3700, status=0x58, head=0xa1a4c00, tail=0xa1a4c00,
> > sched_job=0000000004c90381
> > [  329.411521] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> > 0x0000000000000000
> > [  329.411521] Reason: TODO
> > [  329.411521] raw fault status: 0xAA0003C2
> > [  329.411521] decoded fault status: SLAVE FAULT
> > [  329.411521] exception type 0xC2: TRANSLATION_FAULT_LEVEL2
> > [  329.411521] access type 0x3: WRITE
> > [  329.411521] source id 0xAA00
> 
> Just to keep track of this issue.
> 
> Piotr Oniszczuk did more testing, and the issue seems to be software related:
> https://www.spinics.net/lists/dri-devel/msg264279.html
> 
> Ondrej gave a great explanation about a possible origin of this issue:
> https://freenode.irclog.whitequark.org/linux-sunxi/2020-07-11
> 
> 20:12 <megi> looks like gpu pll on H6 is NKMP clock, and those are
> implemented in such a way in mainline that they are prone to
> overshooting the frequency during output divider reduction
> 20:13 <megi> so disabling P divider may help
> 20:13 <megi> or fixing the dividers
> 20:14 <megi> and just allowing N to change
> 20:22 <megi> hmm, I haven't looked at this for quite some time, but H6
> BSP way of setting PLL factors actually makes the most sense out of
> everything I've seen/tested so far
> 20:23 <megi> it waits for lock not after setting NK factors, but after
> reducing the M factor (pre-divider)
> 20:24 <megi> I might as well re-run my CPU PLL tester with this
> algorithm, to see if it fixes the lockups
> 20:26 <megi> it makes sense to wait for PLL to stabilize "after"
> changing all the factors that actually affect the VCO, and not just
> some of them
> 20:27 <megi> warpme_: ^
> 20:28 <megi> it may be the same thing that plagues the CPU PLL rate
> changes at runtime
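
To make the ordering megi describes concrete, here is a rough sketch of
"wait for lock only after every factor that feeds the VCO has been written,
and touch the output divider last", as opposed to the overshoot-prone
"change N/K, wait, then shrink the dividers". The register offset, bitfield
layout and helper names below are placeholders, not the real H6 GPU PLL
register layout:

    #include <linux/bits.h>
    #include <linux/io.h>
    #include <linux/iopoll.h>

    /* Placeholder layout -- not the actual H6 CCU bitfields. */
    #define PLL_GPU_CTRL    0x030
    #define PLL_LOCK        BIT(28)
    #define PLL_N_MASK      GENMASK(15, 8)
    #define PLL_N(x)        ((x) << 8)
    #define PLL_M_MASK      GENMASK(1, 0)
    #define PLL_M(x)        ((x) << 0)
    #define PLL_P_MASK      GENMASK(17, 16)
    #define PLL_P(x)        ((x) << 16)

    static int pll_gpu_set_factors(void __iomem *ccu, u32 n, u32 m, u32 p)
    {
        u32 val = readl(ccu + PLL_GPU_CTRL);
        int ret;

        /* 1) Write every factor that affects the VCO: N and the M pre-divider. */
        val &= ~(PLL_N_MASK | PLL_M_MASK);
        val |= PLL_N(n) | PLL_M(m);
        writel(val, ccu + PLL_GPU_CTRL);

        /* 2) Only now wait for the PLL to relock, i.e. after *all* VCO factors. */
        ret = readl_poll_timeout(ccu + PLL_GPU_CTRL, val,
                                 val & PLL_LOCK, 100, 10000);
        if (ret)
            return ret;

        /*
         * 3) Reduce the P output divider last, once the VCO is stable, so the
         *    PLL output never overshoots while the dividers shrink.
         */
        val &= ~PLL_P_MASK;
        val |= PLL_P(p);
        writel(val, ccu + PLL_GPU_CTRL);

        return 0;
    }
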

I guess it's one of the bugs we never heard of...

It would be a good idea to test it on another platform (like Rockchip?)
to rule out any driver issue.

What do you think?

Maxime


