On 6/19/22 20:53, Rob Clark wrote:
...
>> +static unsigned long
>> +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
>> +				     struct shrink_control *sc)
>> +{
>> +	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
>> +	struct drm_gem_shmem_object *shmem;
>> +	unsigned long count = 0;
>> +
>> +	if (!mutex_trylock(&gem_shrinker->lock))
>> +		return 0;
>> +
>> +	list_for_each_entry(shmem, &gem_shrinker->lru_evictable, madv_list) {
>> +		count += shmem->base.size;
>> +
>> +		if (count >= SHRINK_EMPTY)
>> +			break;
>> +	}
>> +
>> +	mutex_unlock(&gem_shrinker->lock);
> 
> As I mentioned on the other thread, count_objects being approximate but
> lockless and fast is the important thing. Otherwise when you start
> hitting the shrinker on many threads, you end up serializing them all,
> even if you have no pages to return to the system at that point.

Daniel's point for dropping the lockless variant was that we're already
in trouble if we're hitting the shrinker too often, so extra
optimizations won't bring much benefit to us. Alright, I'll add back the
lockless variant (or will use your drm_gem_lru) in the next revision;
the code difference is very small after all. A rough sketch of what I
have in mind is at the end of this mail.

...
>> +	/* prevent racing with the dma-buf importing/exporting */
>> +	if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) {
>> +		*lock_contention |= true;
>> +		goto resv_unlock;
>> +	}
> 
> I'm not sure this is a good idea to serialize on object_name_lock.
> Purgeable buffers should never be shared (imported or exported). So
> at best you are avoiding evicting and immediately swapping back in, in
> a rare case, at the cost of serializing multiple threads trying to
> reclaim pages in parallel.

The object_name_lock shouldn't cause contention in practice, but objects
are also pinned on attachment, hence maybe this lock is indeed
unnecessary. I'll re-check it.

-- 
Best regards,
Dmitry
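
P.S. The rough, untested sketch of the lockless count_objects variant
mentioned above. It assumes a hypothetical evictable_count atomic
counter in drm_gem_shmem_shrinker that is kept up to date under
gem_shrinker->lock whenever objects are added to or removed from the
evictable LRU, so this callback itself never takes a lock:

static unsigned long
drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
				     struct shrink_control *sc)
{
	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
	unsigned long count;

	/* Approximate, lockless read; a stale value is acceptable here. */
	count = atomic_long_read(&gem_shrinker->evictable_count);

	if (count >= SHRINK_EMPTY)
		return SHRINK_EMPTY - 1;

	return count ?: SHRINK_EMPTY;
}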