On Thu, Apr 30, 2020 at 10:31 AM Schrempf Frieder <frieder.schrempf@xxxxxxxxxx> wrote:
>
> Hi Lucas,
>
> On 30.04.20 16:32, Lucas Stach wrote:
> > Hi Frieder,
> >
> > On Thursday, 30.04.2020 at 12:46 +0000, Schrempf Frieder wrote:
> >> From: Frieder Schrempf <frieder.schrempf@xxxxxxxxxx>
> >>
> >> On i.MX8MM there is an interrupt getting triggered immediately after
> >> requesting the IRQ, which leads to a stall as the handler accesses
> >> the GPU registers without the clock being enabled.
> >>
> >> Enabling the clocks briefly seems to clear the IRQ state, so we do
> >> this before requesting the IRQ.
> >
> > This is most likely caused by improper power-up sequencing. Normally
> > the GPC will trigger a hardware reset of the modules inside a power
> > domain when the domain is powered on. This requires the clocks to be
> > running at this point, as those resets are synchronous, so they need
> > clock pulses to propagate through the hardware.
>
> Ok, I was suspecting something like that and your explanation makes
> total sense to me.
>
> >
> > From what I see the i.MX8MM is still missing the power domain
> > controller integration, but I'm pretty confident that this problem
> > should be solved in the power domain code, instead of the GPU driver.
>
> Ok. I was hoping that GPU support could be added without power domain
> control, but I now see that this is probably not reasonable at all.
> So I will keep on hoping that NXP comes up with an upstreamable solution
> for the power domain handling.

There was a patch series for upstream power-domain control from NXP a
few days ago: https://patchwork.kernel.org/cover/10904511/

Can these patches somehow be tested to see if they help the issue with
the GPU?
adam

> Thanks,
> Frieder
>
> >
> > Regards,
> > Lucas
> >
> >> Signed-off-by: Frieder Schrempf <frieder.schrempf@xxxxxxxxxx>
> >> ---
> >>  drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 29 ++++++++++++++++++++-------
> >>  1 file changed, 22 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> index a31eeff2b297..23877c1f150a 100644
> >> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> @@ -1775,13 +1775,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
> >>  		return gpu->irq;
> >>  	}
> >>
> >> -	err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
> >> -			       dev_name(gpu->dev), gpu);
> >> -	if (err) {
> >> -		dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err);
> >> -		return err;
> >> -	}
> >> -
> >>  	/* Get Clocks: */
> >>  	gpu->clk_reg = devm_clk_get(&pdev->dev, "reg");
> >>  	DBG("clk_reg: %p", gpu->clk_reg);
> >> @@ -1805,6 +1798,28 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
> >>  	gpu->clk_shader = NULL;
> >>  	gpu->base_rate_shader = clk_get_rate(gpu->clk_shader);
> >>
> >> +	/*
> >> +	 * On i.MX8MM there is an interrupt getting triggered immediately
> >> +	 * after requesting the IRQ, which leads to a stall as the handler
> >> +	 * accesses the GPU registers without the clock being enabled.
> >> +	 * Enabling the clocks briefly seems to clear the IRQ state, so we do
> >> +	 * this here before requesting the IRQ.
> >> +	 */
> >> +	err = etnaviv_gpu_clk_enable(gpu);
> >> +	if (err)
> >> +		return err;
> >> +
> >> +	err = etnaviv_gpu_clk_disable(gpu);
> >> +	if (err)
> >> +		return err;
> >> +
> >> +	err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
> >> +			       dev_name(gpu->dev), gpu);
> >> +	if (err) {
> >> +		dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err);
> >> +		return err;
> >> +	}
> >> +
> >>  	/* TODO: figure out max mapped size */
> >>  	dev_set_drvdata(dev, gpu);
> >>
> >
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel