On 26/08/2019 23:33, Rob Herring wrote:
> Currently, page tables are freed without disabling the address space first.
> This probably is fine as we'll switch to new page tables when the address
> space is allocated again and runtime PM suspend will reset the GPU
> clearing the registers. However, it's better to clean up after ourselves.
> There is also a problem that we could be accessing the h/w in
> tlb_inv_context() when suspended.
> 
> Rework the disable code to make sure we flush caches/TLBs and disable the
> address space before freeing the page tables if we are not suspended. As
> the tlb_inv_context() hook is only called when freeing the page tables and
> we do a flush before disabling the AS, lets remove the flush from
> tlb_inv_context and avoid any runtime PM issues.
> 
> Fixes: 7282f7645d06 ("drm/panfrost: Implement per FD address spaces")
> Cc: Tomeu Vizoso <tomeu.vizoso@xxxxxxxxxxxxx>
> Cc: Steven Price <steven.price@xxxxxxx>
> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@xxxxxxxxxxxxx>
> Cc: David Airlie <airlied@xxxxxxxx>
> Cc: Daniel Vetter <daniel@xxxxxxxx>
> Signed-off-by: Rob Herring <robh@xxxxxxxxxx>
> ---
> v3:
>  - New patch replacing "drm/panfrost: Remove unnecessary flushing from tlb_inv_context"
> 
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index d1ebde3327fe..387d830cb7cf 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -129,8 +129,10 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m
>  	write_cmd(pfdev, as_nr, AS_COMMAND_UPDATE);
>  }
>  
> -static void mmu_disable(struct panfrost_device *pfdev, u32 as_nr)
> +static void panfrost_mmu_disable(struct panfrost_device *pfdev, u32 as_nr)
>  {
> +	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
> +
>  	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), 0);
>  	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), 0);
>  

At the end of this function we have:

|	write_cmd(pfdev, as_nr, AS_COMMAND_UPDATE);

which should negate the need for AS_COMMAND_FLUSH_MEM as well.

However, one thing I have just noticed is that write_cmd() doesn't wait for
AS_ACTIVE to be cleared. This means that the GPU has accepted the command but
might not have finished the flush. When freeing page tables we obviously need
to wait for the MMU flush to complete. The extra mmu_hw_do_operation_locked()
'fixes' this partly because there's a back-to-back set of MMU commands, so the
second one will be blocked until AS_COMMAND_FLUSH_MEM has completed, but also
because mmu_hw_do_operation() waits for the flush to complete.

I'm not really sure why mmu_enable()/mmu_disable() have bare calls to
write_cmd() - could they use mmu_hw_do_operation_locked() instead?
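
To make that concrete, the sort of thing I have in mind is pairing the bare
write_cmd() with the existing AS_STATUS/AS_ACTIVE poll (wait_ready() in this
file), so the disable path doesn't return while the h/w is still busy. This
is only a rough, untested sketch, and it assumes the locked variant still ends
with the wait, as mmu_hw_do_operation() does:

static void panfrost_mmu_disable(struct panfrost_device *pfdev, u32 as_nr)
{
	/* Flush and wait: this goes through mmu_hw_do_operation_locked(),
	 * which polls AS_ACTIVE before returning. */
	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM);

	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), 0);
	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), 0);

	/* ... zero the remaining AS registers as the function does today ... */

	/* Issue the update and wait for AS_ACTIVE to clear, rather than a
	 * bare write_cmd() that only hands the command to the GPU. */
	write_cmd(pfdev, as_nr, AS_COMMAND_UPDATE);
	wait_ready(pfdev, as_nr);
}

The same would apply to mmu_enable() if we want its final AS_COMMAND_UPDATE
to be synchronous as well.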
> @@ -321,11 +323,7 @@ void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
>  }
>  
>  static void mmu_tlb_inv_context_s1(void *cookie)
> -{
> -	struct panfrost_file_priv *priv = cookie;
> -
> -	mmu_hw_do_operation(priv->pfdev, &priv->mmu, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
> -}
> +{}
>  
>  static void mmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  				     size_t granule, bool leaf, void *cookie)
> @@ -374,6 +372,11 @@ void panfrost_mmu_pgtable_free(struct panfrost_file_priv *priv)
>  
>  	spin_lock(&pfdev->as_lock);
>  	if (mmu->as >= 0) {
> +		pm_runtime_get_noresume(pfdev->dev);
> +		if (pm_runtime_active(pfdev->dev))
> +			panfrost_mmu_disable(pfdev, mmu->as);
> +		pm_runtime_put_autosuspend(pfdev->dev);
> +
>  		clear_bit(mmu->as, &pfdev->as_alloc_mask);
>  		clear_bit(mmu->as, &pfdev->as_in_use_mask);
>  		list_del(&mmu->list);
> @@ -618,5 +621,4 @@ int panfrost_mmu_init(struct panfrost_device *pfdev)
>  void panfrost_mmu_fini(struct panfrost_device *pfdev)
>  {
>  	mmu_write(pfdev, MMU_INT_MASK, 0);
> -	mmu_disable(pfdev, 0);
>  }
> -- 
> 2.20.1
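
One other note, mostly for anyone reading along: the runtime PM guard added in
panfrost_mmu_pgtable_free() boils down to the pattern below (my paraphrase of
the hunk above, not the exact driver code). pm_runtime_get_noresume() takes a
reference without waking the GPU, so pm_runtime_active() simply reports whether
the h/w is currently powered; if it is suspended we can skip the disable, since
(as the commit message says) the reset on resume clears the AS registers anyway.

	pm_runtime_get_noresume(pfdev->dev);		/* ref, but don't resume */
	if (pm_runtime_active(pfdev->dev))		/* h/w actually powered? */
		panfrost_mmu_disable(pfdev, mmu->as);	/* flush + disable the AS */
	pm_runtime_put_autosuspend(pfdev->dev);		/* drop ref, rearm autosuspend */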