From: Rob Clark <robdclark@xxxxxxxxxxxx>

I've been spending some time looking into how things behave under high
memory pressure. The first patch is a random cleanup I noticed along
the way.

The second improves the situation significantly when the shrinker is
called from many threads in parallel.

And the last two are $debugfs/gem fixes I needed so I could monitor the
state of GEM objects (i.e. how many are active/purgeable/purged) while
triggering high memory pressure.

We could probably go a bit further by dropping the mm_lock in the
shrinker->scan() loop, but this is already a pretty big improvement.
The next step is probably to add support to unpin/evict inactive
objects. (We are part way there, since we have already decoupled the
iova lifetime from the pages lifetime, but there are a few sharp
corners to work through.)

Rob Clark (4):
  drm/msm: Remove unused freed llist node
  drm/msm: Avoid mutex in shrinker_count()
  drm/msm: Fix debugfs deadlock
  drm/msm: Improved debugfs gem stats

 drivers/gpu/drm/msm/msm_debugfs.c      | 14 ++----
 drivers/gpu/drm/msm/msm_drv.c          |  4 ++
 drivers/gpu/drm/msm/msm_drv.h          | 10 ++++-
 drivers/gpu/drm/msm/msm_fb.c           |  3 +-
 drivers/gpu/drm/msm/msm_gem.c          | 61 +++++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem.h          | 58 +++++++++++++++++++++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 17 +------
 7 files changed, 122 insertions(+), 45 deletions(-)

--
2.30.2
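
For readers less familiar with the shrinker API, here is a minimal sketch
of the general technique of keeping a lockless running count so that the
shrinker's count callback never has to take a mutex. This is only an
illustration under assumed names (everything prefixed example_ is
hypothetical), not the code from patch 2:

  #include <linux/atomic.h>
  #include <linux/shrinker.h>

  /* Running count of purgeable pages, updated as objects change state. */
  static atomic_long_t example_shrinkable_count = ATOMIC_LONG_INIT(0);

  /* Call when an object becomes purgeable (e.g. on userspace madvise). */
  static void example_mark_purgeable(long nr_pages)
  {
          atomic_long_add(nr_pages, &example_shrinkable_count);
  }

  /* Call when an object is pinned again or actually purged. */
  static void example_unmark_purgeable(long nr_pages)
  {
          atomic_long_sub(nr_pages, &example_shrinkable_count);
  }

  static unsigned long
  example_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
  {
          /* Lockless read; no need to walk an object list under a mutex. */
          return atomic_long_read(&example_shrinkable_count);
  }

A slightly stale count is fine in this path, since the core shrinker code
only treats the return value of count_objects() as a hint for how much
scan work to queue up.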