On 22-07-28, Marco Felsch wrote:
> On 22-07-28, Liu Ying wrote:
> > On Wed, 2022-07-27 at 05:56 +0200, Marco Felsch wrote:
> > > Hi Marek, Liu,
> > > 
> > > On 22-07-26, Liu Ying wrote:
> > > > On Tue, 2022-07-26 at 16:19 +0200, Marek Vasut wrote:
> > > > > On 7/26/22 11:43, Marco Felsch wrote:
> > > > > > FIFO underruns are seen if an AXI bus master with a higher
> > > > > > priority does a lot of memory access. Increase the burst size
> > > > > > to 256B to avoid such underruns and to improve the memory
> > > > > > access efficiency.
> > > > > 
> > > > > Sigh, this again ...
> > > 
> > > I know.. we also tried the PANIC mode but somehow it didn't work as
> > > documented. So this was the only way to reduce the underruns without
> > > adapting the interconnect prio for the hdmi-lcdif.
> > > 
> > > > > > diff --git a/drivers/gpu/drm/mxsfb/lcdif_kms.c b/drivers/gpu/drm/mxsfb/lcdif_kms.c
> > > > > > index 1bec1279c8b5..1f22ea5896d5 100644
> > > > > > --- a/drivers/gpu/drm/mxsfb/lcdif_kms.c
> > > > > > +++ b/drivers/gpu/drm/mxsfb/lcdif_kms.c
> > > > > > @@ -143,8 +143,20 @@ static void lcdif_set_mode(struct lcdif_drm_private *lcdif, u32 bus_flags)
> > > > > >  	       CTRLDESCL0_1_WIDTH(m->crtc_hdisplay),
> > > > > >  	       lcdif->base + LCDC_V8_CTRLDESCL0_1);
> > > > > >  
> > > > > > -	writel(CTRLDESCL0_3_PITCH(lcdif->crtc.primary->state->fb->pitches[0]),
> > > > > > -	       lcdif->base + LCDC_V8_CTRLDESCL0_3);
> > > > > > +	/*
> > > > > > +	 * Undocumented P_SIZE and T_SIZE bit fields but according the
> > > > > > +	 * downstream kernel they control the AXI burst size. As of now there
> > > > 
> > > > I'm not sure if it is AXI burst size or any other burst size, though
> > > > it seems to be AXI burst size.
> > > > 
> > > > Cc'ing Jian who mentioned 'burst size' and changed it from 128B to
> > > > 256B in the downstream kernel.
> > > 
> > > Thanks.
> > Jian told me that it's AXI burst size.
> 
> Thanks for asking him. Do you know anything about the PANIC mode? We
> tested it by:
> - using the interconnect patchsets [1]
> - adding a patch for configuring the hdmi-lcdif interconnect via DT [not
>   sent upstream yet]
> - setting the PANIC thresholds to thresh-low:1/2 and thresh-high:3/4 and
>   enabling INT_ENABLE_D1_PLANE_PANIC_EN within LCDC_V8_INT_ENABLE_D1
>   (like the downstream kernel) [not sent upstream yet]
> - configuring 'LCDIF_NOC_HURRY' to 0x7 (highest prio) like you do within
>   your downstream TF-A.
> 
> But this didn't work for us, and for Marek too, if I got him correctly.
> My question is: is the PANIC mode working as documented, or are there
> some missing bits?
> 
> You can test whether the PANIC mode is working on the downstream kernel
> by reducing the AXI burst size back to 64B, connecting a display with at
> least 1080p and doing some heavy memory access. If panic mode is working
> you shouldn't see any display artifacts. I tested this before increasing
> the AXI burst size by setting the HDMI-LCDIF prio statically to a high
> prio like 0x5, and it was working with a 64B AXI burst size.

[1] https://lore.kernel.org/linux-arm-kernel/20220703091132.1412063-1-peng.fan@xxxxxxxxxxx/
https://lore.kernel.org/all/20220708085632.1918323-1-peng.fan@xxxxxxxxxxx/