On Mon, 5 Oct 2020 20:40:12 Rob Clark <robdclark@xxxxxxxxx> wrote:
> On Mon, Oct 5, 2020 at 5:44 PM Hillf Danton <hdanton@xxxxxxxx> wrote:
> > On Mon, 5 Oct 2020 18:17:01 Kristian H. Kristensen wrote:
> > > On Mon, Oct 5, 2020 at 4:02 PM Daniel Vetter <daniel@xxxxxxxx> wrote:
> > > >
> > > > On Mon, Oct 05, 2020 at 05:24:19PM +0800, Hillf Danton wrote:
> > > > >
> > > > > On Sun, 4 Oct 2020 12:21:45
> > > > > > From: Rob Clark <robdclark@xxxxxxxxxxxx>
> > > > > >
> > > > > > Now that the inactive_list is protected by mm_lock, and everything
> > > > > > else on per-obj basis is protected by obj->lock, we no longer depend
> > > > > > on struct_mutex.
> > > > > >
> > > > > > Signed-off-by: Rob Clark <robdclark@xxxxxxxxxxxx>
> > > > > > ---
> > > > > >  drivers/gpu/drm/msm/msm_gem.c          |  1 -
> > > > > >  drivers/gpu/drm/msm/msm_gem_shrinker.c | 54 --------------------------
> > > > > >  2 files changed, 55 deletions(-)
> > > > > >
> > > > > [...]
> > > > >
> > > > > > @@ -71,13 +33,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
> > > > > >  {
> > > > > >  	struct msm_drm_private *priv =
> > > > > >  		container_of(shrinker, struct msm_drm_private, shrinker);
> > > > > > -	struct drm_device *dev = priv->dev;
> > > > > >  	struct msm_gem_object *msm_obj;
> > > > > >  	unsigned long freed = 0;
> > > > > > -	bool unlock;
> > > > > > -
> > > > > > -	if (!msm_gem_shrinker_lock(dev, &unlock))
> > > > > > -		return SHRINK_STOP;
> > > > > >
> > > > > >  	mutex_lock(&priv->mm_lock);
> > > > >
> > > > > Better if the change in behavior is documented, i.e. that SHRINK_STOP
> > > > > will no longer be needed.
> > > >
> > > > btw I read through this and noticed you have your own obj lock, plus
> > > > mutex_lock_nested. I strongly recommend to just cut over to dma_resv_lock
> > > > for all object lock needs (SoC drivers have been terrible with this
> > > > unfortunately), and in the shrinker just use dma_resv_trylock instead of
> > > > trying to play clever games outsmarting lockdep.
> >
> > The trylock makes page reclaimers turn to their next target, e.g. the
> > inode cache, instead of waiting for the mutex to be released. It makes
> > sense, for instance, in scenarios of mild memory pressure.
>
> Is there some behind-the-scenes signalling for this, or is this just
> down to what the shrinker callbacks return?

Let's see what Dave has in mind about your questions.

> Generally when we get into shrinking, there is a big set of purgeable
> BOs to consider, so the shrinker callback return wouldn't be considering
> just one potentially lock-contended BO (buffer object). I.e. failing one
> trylock, we just move on to the next.
>
> fwiw, what I've seen on the userspace BO cache vs the shrinker (anything
> that is shrinker potential is in the userspace BO cache and
> MADV(WONTNEED)) is that in steady state I see a very strong recycling of
> BOs (which avoids allocating, mmap'ing, or GPU-mapping a new buffer
> object), so it is definitely a win in mmap/realloc bandwidth.. in steady
> state there is a lot of free and realloc of same-sized buffers from
> frame to frame.
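
To make the locking side of this concrete, below is roughly what I would
expect the scan callback to look like if it follows Daniel's suggestion and
trylocks each object's dma_resv. This is only a sketch from my side, not
tested against your tree, and is_purgeable()/msm_gem_purge() stand in for
whatever purge helpers the driver already has:

static unsigned long
msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct msm_drm_private *priv =
		container_of(shrinker, struct msm_drm_private, shrinker);
	struct msm_gem_object *msm_obj;
	unsigned long freed = 0;

	mutex_lock(&priv->mm_lock);

	list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
		if (freed >= sc->nr_to_scan)
			break;

		/* A contended object is skipped rather than stalled on;
		 * the next invocation of the shrinker gets another shot.
		 */
		if (!dma_resv_trylock(msm_obj->base.resv))
			continue;

		if (is_purgeable(msm_obj)) {
			msm_gem_purge(&msm_obj->base);
			freed += msm_obj->base.size >> PAGE_SHIFT;
		}

		dma_resv_unlock(msm_obj->base.resv);
	}

	mutex_unlock(&priv->mm_lock);

	/* The number of pages actually freed (possibly 0) is enough for
	 * vmscan to move on to its other shrinkers; SHRINK_STOP is only
	 * for the case where no progress can be made at all.
	 */
	return freed;
}

That return-value behavior is the change I was asking to have spelled out
in the commit message, given that the old code bailed out with SHRINK_STOP
whenever it could not take struct_mutex.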
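
And for the userspace BO cache you describe, my understanding of the madvise
handshake is roughly the sketch below. Again this is untested, and the ioctl,
struct and flag names are from my reading of the msm uapi header and libdrm,
so please double-check them:

#include <stdbool.h>
#include <stdint.h>
#include <xf86drm.h>
#include <msm_drm.h>	/* msm uapi; exact include path depends on the libdrm install */

/* BO goes back into the cache: tell the kernel the contents are
 * disposable so the shrinker may purge the backing pages.
 */
static bool bo_cache_release(int fd, uint32_t handle)
{
	struct drm_msm_gem_madvise req = {
		.handle = handle,
		.madv = MSM_MADV_DONTNEED,
	};

	return drmCommandWriteRead(fd, DRM_MSM_GEM_MADVISE,
				   &req, sizeof(req)) == 0;
}

/* BO is recycled from the cache: claim it back and check whether the
 * pages survived a shrinker pass; if not, the caller treats it as a
 * freshly allocated buffer with undefined contents.
 */
static bool bo_cache_reuse(int fd, uint32_t handle)
{
	struct drm_msm_gem_madvise req = {
		.handle = handle,
		.madv = MSM_MADV_WILLNEED,
	};

	if (drmCommandWriteRead(fd, DRM_MSM_GEM_MADVISE,
				&req, sizeof(req)))
		return false;

	return req.retained;
}

If that matches what your userspace does, the bandwidth win you mention is
exactly the reuse path skipping the new allocation, mmap and GPU map.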

> But in transient situations like moving to a new game level, when there
> is heavy memory pressure and lots of freeing of old
> buffers/textures/etc and then allocating new ones, I see the shrinker
> kicking in hard (in Android situations, not so much with traditional
> Linux userspace).
>
> BR,
> -R
>
> > > >
> > > > I recently wrote an entire blog-length rant on why I think
> > > > mutex_lock_nested is too dangerous to be useful:
> > > >
> > > > https://blog.ffwll.ch/2020/08/lockdep-false-positives.html
> > > >
> > > > Not anything about this here, just a general comment. The problem
> > > > extends to shmem helpers and all that also having their own locks
> > > > for everything.
> > >
> > > This is definitely a tangible improvement though - very happy to see
> > > msm_gem_shrinker_lock() go.
> > >
> > > Reviewed-by: Kristian H. Kristensen <hoegsberg@xxxxxxxxxx>
> > >
> > > > -Daniel
> > > > --
> > > > Daniel Vetter
> > > > Software Engineer, Intel Corporation
> > > > http://blog.ffwll.ch
> > > > _______________________________________________
> > > > dri-devel mailing list
> > > > dri-devel@xxxxxxxxxxxxxxxxxxxxx
> > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel