Hello Steve,

On 11/14/22 19:54, Steven Price wrote:
> On 05/11/2022 23:27, Dmitry Osipenko wrote:
>> Replace Panfrost's custom memory shrinker with a common drm-shmem
>> memory shrinker.
>>
>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@xxxxxxxxxxxxx>
>
> Sadly this triggers GPU faults under memory pressure - it looks
> suspiciously like mappings are being freed while the jobs are still
> running.
>
> I'm not sure I understand how the generic shrinker replicates the
> "gpu_usecount" atomic that Panfrost currently has, and I'm wondering if
> that's the cause?
>
> Also just reverting this commit (so just patches 1-6) I can't actually
> get Panfrost to purge any memory. So I don't think the changes (most
> likely in patch 4) are quite right either.
>
> At the moment I don't have the time to investigate in detail. But if
> you've any ideas for something specific I should look at I can run more
> testing.

Thank you for the testing! It just occurred to me that the shrinker
callback lost the dma_resv_test_signaled() check compared to the
previous versions of this patchset. I had assumed that drm_gem_lru now
checks whether the reservation is busy, but it doesn't.

I saw similar page faults once in a while when I was testing the
Panfrost driver, but then couldn't reproduce them after applying the
IOMMU unmap-range fix that Robin made recently.

I'll re-add the dma_resv_test_signaled() check in v9; it was luck that
I didn't hit this much during my testing.

-- 
Best regards,
Dmitry