On Fri, Aug 23, 2019 at 11:16 AM Robin Murphy <robin.murphy@xxxxxxx> wrote:
>
> On 23/08/2019 16:57, Rob Herring wrote:
> > On Fri, Aug 23, 2019 at 10:44 AM Robin Murphy <robin.murphy@xxxxxxx> wrote:
> >>
> >> On 23/08/2019 16:05, Steven Price wrote:
> >>> On 23/08/2019 12:11, Robin Murphy wrote:
> >>>> On 23/08/2019 03:12, Rob Herring wrote:
> >>>>> There is no point in resuming the h/w just to do flush operations and
> >>>>> doing so takes several locks which cause lockdep issues with the
> >>>>> shrinker.
> >>>>> Rework the flush operations to only happen when the h/w is already
> >>>>> awake.
> >>>>> This avoids taking any locks associated with resuming.
> >>>>>
> >>>>> Cc: Tomeu Vizoso <tomeu.vizoso@xxxxxxxxxxxxx>
> >>>>> Cc: Steven Price <steven.price@xxxxxxx>
> >>>>> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@xxxxxxxxxxxxx>
> >>>>> Cc: David Airlie <airlied@xxxxxxxx>
> >>>>> Cc: Daniel Vetter <daniel@xxxxxxxx>
> >>>>> Signed-off-by: Rob Herring <robh@xxxxxxxxxx>
> >>>>> ---
> >>>>> v2: new patch
> >>>>>
> >>>>>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 41 ++++++++++++-------------
> >>>>>  1 file changed, 20 insertions(+), 21 deletions(-)
> >>>>>
> >>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>>>> b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>>>> index 842bdd7cf6be..ccf671a9c3fb 100644
> >>>>> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>>>> @@ -220,6 +220,23 @@ static size_t get_pgsize(u64 addr, size_t size)
> >>>>>       return SZ_2M;
> >>>>>  }
> >>>>> +void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
> >>>>> +                             struct panfrost_mmu *mmu,
> >>>>> +                             u64 iova, size_t size)
> >>>>> +{
> >>>>> +     if (mmu->as < 0)
> >>>>> +             return;
> >>>>> +
> >>>>> +     /* Flush the PTs only if we're already awake */
> >>>>> +     if (!pm_runtime_get_if_in_use(pfdev->dev))
> >>>>> +             return;
> >>>>
> >>>> Is the MMU guaranteed to be reset on resume such that the TLBs will
> >>>> always come up clean? Otherwise there are potentially corners where
> >>>> stale entries that we skip here might hang around if the hardware lies
> >>>> about powering things down.
> >>>
> >>> Assuming runtime PM has actually committed to the power off, then on
> >>> power on panfrost_device_resume() will be called which performs a reset
> >>> of the GPU which will clear the L2/TLBs so there shouldn't be any
> >>> problem there.
> >>
> >> OK, if panfrost_gpu_soft_reset() is sufficient to guarantee clean TLBs
> >> then this looks equivalent to what we did for arm-smmu, so I've no
> >> complaints in that regard.
> >>
> >> However on second look I've now noticed the panfrost_mmu_flush_range()
> >> calls being moved outside of mmu->lock protection. Forgive me if there's
> >> basic DRM knowledge I'm missing here, but is there any possibility for
> >> multiple threads to create/import/free objects simultaneously on the
> >> same FD such that two mmu_hw_do_operation() calls could race and
> >> interfere with each other in terms of the
> >> AS_LOCKADDR/AS_COMMAND/AS_STATUS dance?
> >
> > Yes, we could have multiple threads. Not really any good reason it's
> > moved out of the mmu->lock other than just to avoid any nesting
> > (though that seemed fine in testing). The newly added as_lock will
> > serialize mmu_hw_do_operation(). So the mmu->lock is just serializing
> > page table writes.
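
(Expanding on the quoted hunk above, since it cuts off right after the early
return: the idea is the usual "only flush if a runtime PM reference can be
taken without waking the device" pattern. The sketch below is my paraphrase
of that shape rather than the rest of the diff -- in particular the
mmu_hw_do_operation() call and the put side shown here are illustrative, not
quoted.)

/*
 * Sketch of the flush-gating pattern (paraphrase, not the actual diff).
 * pm_runtime_get_if_in_use() only takes a reference when the device is
 * already powered; if the GPU is suspended it returns 0, the flush is
 * skipped, and the reset done by panfrost_device_resume() leaves the
 * L2/TLBs clean anyway.
 */
void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
			      struct panfrost_mmu *mmu,
			      u64 iova, size_t size)
{
	if (mmu->as < 0)
		return;

	/* Flush the PTs only if we're already awake */
	if (!pm_runtime_get_if_in_use(pfdev->dev))
		return;

	/* Illustrative: flush the range via the AS registers */
	mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);

	/* Drop the reference taken above and let autosuspend kick in later */
	pm_runtime_mark_last_busy(pfdev->dev);
	pm_runtime_put_autosuspend(pfdev->dev);
}
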
>
> Urgh, sorry, once again I'd stopped looking at -next and was
> cross-referencing my current rc3-based working tree :(
>
> In that case, you may even be able to remove mmu->lock entirely, since
> io-pgtable-arm doesn't need external locking itself. And for this patch,

I was wondering about that, but hadn't gotten around to investigating.

>
> Reviewed-by: Robin Murphy <robin.murphy@xxxxxxx>
>
> Cheers,
> Robin.
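
(Jotting the mmu->lock idea down while it's fresh: if io-pgtable-arm really
is safe against concurrent map/unmap on the same pgtable ops, as it is for
arm-smmu, the map path could presumably lose the mutex entirely and end up
roughly like the sketch below, with only as_lock left serializing the AS
register accesses. This is a hypothetical shape written from memory, not the
code in -next.)

/*
 * Hypothetical shape of the map path with mmu->lock removed (untested
 * sketch).  io-pgtable-arm installs tables and PTEs atomically, so
 * concurrent maps of disjoint ranges don't need an external mutex;
 * mmu_hw_do_operation() stays serialized by as_lock.
 */
static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
		      u64 iova, int prot, struct sg_table *sgt)
{
	unsigned int count;
	struct scatterlist *sgl;
	struct io_pgtable_ops *ops = mmu->pgtbl_ops;
	u64 start_iova = iova;

	for_each_sg(sgt->sgl, sgl, sgt->nents, count) {
		unsigned long paddr = sg_dma_address(sgl);
		size_t len = sg_dma_len(sgl);

		while (len) {
			size_t pgsize = get_pgsize(iova | paddr, len);

			/* No mutex_lock(&mmu->lock) around the PT write */
			ops->map(ops, iova, paddr, pgsize, prot);
			iova += pgsize;
			paddr += pgsize;
			len -= pgsize;
		}
	}

	/* Flush outside any page-table lock, gated on runtime PM as above */
	panfrost_mmu_flush_range(pfdev, mmu, start_iova, iova - start_iova);

	return 0;
}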