From: Christian König <ckoenig.leichtzumerken@xxxxxxxxx>

While unplugging a device the TTM shrinker implementation needs a barrier
to make sure that all concurrent shrink operations are done and no other
CPU is referring to a device specific pool any more.

Taking and releasing the shrinker semaphore on the write side after
unmapping and freeing all pages from the device pool should make sure
that no shrinker is running in parallel.

This allows us to avoid the contended mutex in the TTM pool
implementation for every alloc/free operation.

v2: rework the commit message to make clear why we need this

Signed-off-by: Christian König <christian.koenig@xxxxxxx>
Acked-by: Huang Rui <ray.huang@xxxxxxx>
Reviewed-by: Daniel Vetter <daniel.vetter@xxxxxxxx>
---
 include/linux/shrinker.h |  1 +
 mm/vmscan.c              | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 9814fff58a69..1de17f53cdbc 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -93,4 +93,5 @@ extern void register_shrinker_prepared(struct shrinker *shrinker);
 extern int register_shrinker(struct shrinker *shrinker);
 extern void unregister_shrinker(struct shrinker *shrinker);
 extern void free_prealloced_shrinker(struct shrinker *shrinker);
+extern void sync_shrinkers(void);
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4620df62f0ff..fde1aabcfa7f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -638,6 +638,16 @@ void unregister_shrinker(struct shrinker *shrinker)
 }
 EXPORT_SYMBOL(unregister_shrinker);
 
+/**
+ * sync_shrinkers - Wait for all running shrinkers to complete.
+ */
+void sync_shrinkers(void)
+{
+	down_write(&shrinker_rwsem);
+	up_write(&shrinker_rwsem);
+}
+EXPORT_SYMBOL(sync_shrinkers);
+
 #define SHRINK_BATCH 128
 
 static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
-- 
2.25.1
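
For illustration, a minimal sketch of how a device pool teardown path
could use the new barrier; this caller is not part of this patch, and the
ttm_pool_fini()/ttm_pool_type_fini() names, struct layout and loop bounds
are assumptions made only for the example:

/* Illustrative only: hypothetical per-device pool teardown. */
void ttm_pool_fini(struct ttm_pool *pool)
{
	unsigned int i, j;

	/* Unmap and free every page still held by the device pool. */
	for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
		for (j = 0; j < MAX_ORDER; ++j)
			ttm_pool_type_fini(&pool->caching[i].orders[j]);

	/*
	 * Barrier: taking and releasing shrinker_rwsem on the write side
	 * waits for any shrinker that may still be walking this pool's
	 * lists, so the pool memory can be released safely afterwards.
	 */
	sync_shrinkers();
}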