The patch titled
     Subject: mm: shrinker: add infrastructure for dynamically allocating shrinker
has been added to the -mm mm-unstable branch.  Its filename is
     mm-shrinker-add-infrastructure-for-dynamically-allocating-shrinker.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-shrinker-add-infrastructure-for-dynamically-allocating-shrinker.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Subject: mm: shrinker: add infrastructure for dynamically allocating shrinker
Date: Mon, 11 Sep 2023 17:44:00 +0800

Patch series "use refcount+RCU method to implement lockless slab shrink", v6.

1. Background
=============

We used to implement the lockless slab shrink with SRCU [1], but then the
kernel test robot reported a -88.8% regression in the
stress-ng.ramfs.ops_per_sec test case [2], so we reverted it [3].

This patch series aims to re-implement the lockless slab shrink using the
refcount+RCU method proposed by Dave Chinner [4].

[1]. https://lore.kernel.org/lkml/20230313112819.38938-1-zhengqi.arch@xxxxxxxxxxxxx/
[2]. https://lore.kernel.org/lkml/202305230837.db2c233f-yujie.liu@xxxxxxxxx/
[3]. https://lore.kernel.org/all/20230609081518.3039120-1-qi.zheng@xxxxxxxxx/
[4]. https://lore.kernel.org/lkml/ZIJhou1d55d4H1s0@xxxxxxxxxxxxxxxxxxx/

2. Implementation
=================

Currently, shrinker instances fall into the following three types:

a) global shrinker instances statically defined in the kernel, such as
   workingset_shadow_shrinker.

b) global shrinker instances statically defined in kernel modules, such
   as mmu_shrinker in x86.

c) shrinker instances embedded in other structures.

For case a, the memory of the shrinker instance is never freed.  For case
b, the memory of the shrinker instance is freed after synchronize_rcu()
when the module is unloaded.  For case c, the memory of the shrinker
instance is freed along with the structure it is embedded in.

In preparation for implementing lockless slab shrink, we need to
dynamically allocate those shrinker instances in case c, so that the
memory can be freed asynchronously via kfree_rcu().

This patchset adds the following new APIs for dynamically allocating
shrinkers, and adds a private_data field to struct shrinker to record and
retrieve the original embedded structure:

1. shrinker_alloc()
2. shrinker_register()
3. shrinker_free()

In order to simplify shrinker-related APIs and make shrinkers more
independent of other kernel mechanisms, this patchset uses the above APIs
to convert all shrinkers (including cases a and b) to dynamically
allocated ones, and then removes all the existing APIs.
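For illustration, here is a minimal sketch of how a structure with an
embedded shrinker (case c) might be converted to the new API.  Only
shrinker_alloc()/shrinker_register()/shrinker_free() and the private_data
field come from this series; struct my_cache and its callbacks are
hypothetical:

```
/* Hypothetical cache converted to a dynamically allocated shrinker. */
struct my_cache {
	struct list_lru	lru;
	struct shrinker	*shrinker;	/* was: struct shrinker shrinker; */
};

static unsigned long my_count(struct shrinker *shrinker,
			      struct shrink_control *sc)
{
	/* private_data points back to the embedding structure. */
	struct my_cache *cache = shrinker->private_data;

	return list_lru_shrink_count(&cache->lru, sc);
}

static unsigned long my_scan(struct shrinker *shrinker,
			     struct shrink_control *sc)
{
	struct my_cache *cache = shrinker->private_data;

	/* Reclaim up to sc->nr_to_scan objects from cache->lru here. */
	return SHRINK_STOP;	/* placeholder: nothing reclaimed in this sketch */
}

static int my_cache_init(struct my_cache *cache)
{
	cache->shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE, "my-cache");
	if (!cache->shrinker)
		return -ENOMEM;

	cache->shrinker->count_objects = my_count;
	cache->shrinker->scan_objects = my_scan;
	cache->shrinker->private_data = cache;

	shrinker_register(cache->shrinker);
	return 0;
}

static void my_cache_exit(struct my_cache *cache)
{
	/* Unregisters (if registered) and frees; NULL-safe. */
	shrinker_free(cache->shrinker);
}
```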
This will also have another advantage mentioned by Dave Chinner:

```
The other advantage of this is that it will break all the existing
out of tree code and third party modules using the old API and will
no longer work with a kernel using lockless slab shrinkers. They
need to break (both at the source and binary levels) to stop bad
things from happening due to using unconverted shrinkers in the new
setup.
```

Then we free the shrinker by calling call_rcu(), and use
rcu_read_{lock,unlock}() to ensure that the shrinker instance is valid.

And the shrinker::refcount mechanism ensures that the shrinker instance
will not be run again after unregistration.  So the structure that
records the pointer to the shrinker instance can be safely freed without
waiting for the RCU read-side critical section.

In this way, while we implement the lockless slab shrink, we don't need
to be blocked in unregister_shrinker() waiting for the RCU read-side
critical section.

PATCH 1:     introduce new APIs
PATCH 2~38:  convert all shrinkers to use the new APIs
PATCH 39:    remove the old APIs
PATCH 40~41: some cleanups and preparations
PATCH 42~43: implement the lockless slab shrink
PATCH 44~45: convert shrinker_rwsem to mutex

3. Testing
==========

3.1 slab shrink stress test
---------------------------

We can reproduce the down_read_trylock() hotspot through the following script:

```
DIR="/root/shrinker/memcg/mnt"

do_create()
{
	mkdir -p /sys/fs/cgroup/memory/test
	echo 4G > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
	for i in `seq 0 $1`;
	do
		mkdir -p /sys/fs/cgroup/memory/test/$i;
		echo $$ > /sys/fs/cgroup/memory/test/$i/cgroup.procs;
		mkdir -p $DIR/$i;
	done
}

do_mount()
{
	for i in `seq $1 $2`;
	do
		mount -t tmpfs $i $DIR/$i;
	done
}

do_touch()
{
	for i in `seq $1 $2`;
	do
		echo $$ > /sys/fs/cgroup/memory/test/$i/cgroup.procs;
		dd if=/dev/zero of=$DIR/$i/file$i bs=1M count=1 &
	done
}

case "$1" in
  touch)
	do_touch $2 $3
	;;
  test)
	do_create 4000
	do_mount 0 4000
	do_touch 0 3000
	;;
  *)
	exit 1
	;;
esac
```

Save the above script, then run the test and touch commands.  Then we can
use the following perf command to view hotspots:

perf top -U -F 999

1) Before applying this patchset:

  33.15%  [kernel]        [k] down_read_trylock
  25.38%  [kernel]        [k] shrink_slab
  21.75%  [kernel]        [k] up_read
   4.45%  [kernel]        [k] _find_next_bit
   2.27%  [kernel]        [k] do_shrink_slab
   1.80%  [kernel]        [k] intel_idle_irq
   1.79%  [kernel]        [k] shrink_lruvec
   0.67%  [kernel]        [k] xas_descend
   0.41%  [kernel]        [k] mem_cgroup_iter
   0.40%  [kernel]        [k] shrink_node
   0.38%  [kernel]        [k] list_lru_count_one

2) After applying this patchset:

  64.56%  [kernel]        [k] shrink_slab
  12.18%  [kernel]        [k] do_shrink_slab
   3.30%  [kernel]        [k] __rcu_read_unlock
   2.61%  [kernel]        [k] shrink_lruvec
   2.49%  [kernel]        [k] __rcu_read_lock
   1.93%  [kernel]        [k] intel_idle_irq
   0.89%  [kernel]        [k] shrink_node
   0.81%  [kernel]        [k] mem_cgroup_iter
   0.77%  [kernel]        [k] mem_cgroup_calculate_protection
   0.66%  [kernel]        [k] list_lru_count_one

We can see that the first perf hotspot becomes shrink_slab, which is what
we expect.
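The down_read_trylock()/up_read() hotspots disappear because patches
42~43 of the series replace the shrinker_rwsem read lock in shrink_slab()
with rcu_read_lock() plus a per-shrinker refcount.  As a rough sketch
only: the helper names and the refcount/rcu fields approximate the
approach proposed in [4] and added by later patches, and this is not
necessarily the exact code that lands:

```
/* Sketch of the lockless shrinker walk introduced by patches 42~43. */
static bool shrinker_try_get(struct shrinker *shrinker)
{
	/* Assumes a refcount_t added to struct shrinker later in the series. */
	return refcount_inc_not_zero(&shrinker->refcount);
}

static void shrinker_put(struct shrinker *shrinker)
{
	/* Last ref gone: free after all RCU readers are done. */
	if (refcount_dec_and_test(&shrinker->refcount))
		kfree_rcu(shrinker, rcu);	/* rcu field also added later */
}

static unsigned long shrink_slab_lockless(gfp_t gfp_mask, int nid,
					  struct mem_cgroup *memcg,
					  int priority)
{
	unsigned long freed = 0;
	struct shrinker *shrinker;

	rcu_read_lock();
	list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
		struct shrink_control sc = {
			.gfp_mask = gfp_mask,
			.nid = nid,
			.memcg = memcg,
		};

		/* Skip shrinkers that are concurrently being freed. */
		if (!shrinker_try_get(shrinker))
			continue;

		/*
		 * The refcount now pins the shrinker, so the RCU read lock
		 * can be dropped across the (possibly slow) scan.
		 */
		rcu_read_unlock();

		freed += do_shrink_slab(&sc, shrinker, priority);

		rcu_read_lock();
		shrinker_put(shrinker);
	}
	rcu_read_unlock();

	return freed;
}
```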
3.2 registration and unregistration stress test
-----------------------------------------------

Run the command below to test:

stress-ng --timeout 60 --times --verify --metrics-brief --ramfs 9 &

1) Before applying this patchset:

setting to a 60 second run per stressor
dispatching hogs: 9 ramfs
stressor       bogo ops real time  usr time  sys time   bogo ops/s     bogo ops/s
                          (secs)    (secs)    (secs)   (real time) (usr+sys time)
ramfs            473062     60.00      8.00    279.13      7884.12        1647.59
for a 60.01s run time:
   1440.34s available CPU time
      7.99s user time   (  0.55%)
    279.13s system time ( 19.38%)
    287.12s total time  ( 19.93%)
load average: 7.12 2.99 1.15
successful run completed in 60.01s (1 min, 0.01 secs)

2) After applying this patchset:

setting to a 60 second run per stressor
dispatching hogs: 9 ramfs
stressor       bogo ops real time  usr time  sys time   bogo ops/s     bogo ops/s
                          (secs)    (secs)    (secs)   (real time) (usr+sys time)
ramfs            477165     60.00      8.13    281.34      7952.55        1648.40
for a 60.01s run time:
   1440.33s available CPU time
      8.12s user time   (  0.56%)
    281.34s system time ( 19.53%)
    289.46s total time  ( 20.10%)
load average: 6.98 3.03 1.19
successful run completed in 60.01s (1 min, 0.01 secs)

We can see that the ops/s has hardly changed.


This patch (of 45):

Currently, shrinker instances fall into the following three types:

a) global shrinker instances statically defined in the kernel, such as
   workingset_shadow_shrinker.

b) global shrinker instances statically defined in kernel modules, such
   as mmu_shrinker in x86.

c) shrinker instances embedded in other structures.

For case a, the memory of the shrinker instance is never freed.  For case
b, the memory of the shrinker instance is freed after synchronize_rcu()
when the module is unloaded.  For case c, the memory of the shrinker
instance is freed along with the structure it is embedded in.

In preparation for implementing lockless slab shrink, we need to
dynamically allocate those shrinker instances in case c, so that the
memory can be freed asynchronously via kfree_rcu().

So this commit adds the following new APIs for dynamically allocating
shrinkers, and adds a private_data field to struct shrinker to record and
retrieve the original embedded structure:

1. shrinker_alloc()

Used to allocate the shrinker instance itself and any related memory.  It
will return a pointer to the shrinker instance on success and NULL on
failure.

2. shrinker_register()

Used to register the shrinker instance; it does the same as the current
register_shrinker_prepared().

3. shrinker_free()

Used to unregister (if needed) and free the shrinker instance.

In order to simplify shrinker-related APIs and make shrinkers more
independent of other kernel mechanisms, subsequent submissions will use
the above APIs to convert all shrinkers (including cases a and b) to
dynamically allocated ones, and then remove all the existing APIs.

This will also have another advantage mentioned by Dave Chinner:

```
The other advantage of this is that it will break all the existing
out of tree code and third party modules using the old API and will
no longer work with a kernel using lockless slab shrinkers. They
need to break (both at the source and binary levels) to stop bad
things from happening due to using unconverted shrinkers in the new
setup.
```
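One property of these APIs worth illustrating: shrinker_free() works both
before and after shrinker_register(), so a single call can unwind either
an allocated-but-unregistered or a registered shrinker.  A hedged sketch;
struct my_fs, its fields, and my_fs_init_caches() are hypothetical, the
shrinker calls are from this patch:

```
/* Hypothetical setup path: shrinker_free() serves as the one teardown
 * for both the not-yet-registered and the registered case. */
static int my_fs_setup(struct my_fs *fs)
{
	int err;

	fs->shrinker = shrinker_alloc(SHRINKER_MEMCG_AWARE, "my-fs:%s", fs->name);
	if (!fs->shrinker)
		return -ENOMEM;

	fs->shrinker->count_objects = my_fs_count;
	fs->shrinker->scan_objects = my_fs_scan;
	fs->shrinker->private_data = fs;

	err = my_fs_init_caches(fs);	/* hypothetical */
	if (err) {
		/* Not registered yet: shrinker_free() just frees the memory. */
		shrinker_free(fs->shrinker);
		return err;
	}

	/* From here on the shrinker may run; shrinker_free() in the
	 * teardown path will unregister it first. */
	shrinker_register(fs->shrinker);
	return 0;
}
```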
Link: https://lkml.kernel.org/r/20230911094444.68966-1-zhengqi.arch@xxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20230911094444.68966-2-zhengqi.arch@xxxxxxxxxxxxx
Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Cc: Christian Brauner <brauner@xxxxxxxxxx>
Cc: Chuck Lever <cel@xxxxxxxxxx>
Cc: Darrick J. Wong <djwong@xxxxxxxxxx>
Cc: Dave Chinner <david@xxxxxxxxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Kirill Tkhai <tkhai@xxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Cc: Steven Price <steven.price@xxxxxxx>
Cc: Theodore Ts'o <tytso@xxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Abhinav Kumar <quic_abhinavk@xxxxxxxxxxx>
Cc: Alasdair Kergon <agk@xxxxxxxxxx>
Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@xxxxxxxxxxxxx>
Cc: Andreas Dilger <adilger.kernel@xxxxxxxxx>
Cc: Andreas Gruenbacher <agruenba@xxxxxxxxxx>
Cc: Anna Schumaker <anna@xxxxxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Bob Peterson <rpeterso@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Carlos Llamas <cmllamas@xxxxxxxxxx>
Cc: Chandan Babu R <chandan.babu@xxxxxxxxxx>
Cc: Chao Yu <chao@xxxxxxxxxx>
Cc: Chris Mason <clm@xxxxxx>
Cc: Christian Koenig <christian.koenig@xxxxxxx>
Cc: Coly Li <colyli@xxxxxxx>
Cc: Dai Ngo <Dai.Ngo@xxxxxxxxxx>
Cc: Daniel Vetter <daniel@xxxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: David Airlie <airlied@xxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: David Sterba <dsterba@xxxxxxxx>
Cc: Dmitry Baryshkov <dmitry.baryshkov@xxxxxxxxxx>
Cc: Gao Xiang <hsiangkao@xxxxxxxxxxxxxxxxx>
Cc: Huang Rui <ray.huang@xxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Jaegeuk Kim <jaegeuk@xxxxxxxxxx>
Cc: Jani Nikula <jani.nikula@xxxxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Jason Wang <jasowang@xxxxxxxxxx>
Cc: Jeff Layton <jlayton@xxxxxxxxxx>
Cc: Jeffle Xu <jefflexu@xxxxxxxxxxxxxxxxx>
Cc: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
Cc: Josef Bacik <josef@xxxxxxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: Kent Overstreet <kent.overstreet@xxxxxxxxx>
Cc: Marijn Suijten <marijn.suijten@xxxxxxxxxxxxxx>
Cc: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
Tsirkin" <mst@xxxxxxxxxx> Cc: Mike Snitzer <snitzer@xxxxxxxxxx> Cc: Minchan Kim <minchan@xxxxxxxxxx> Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx> Cc: Nadav Amit <namit@xxxxxxxxxx> Cc: Neil Brown <neilb@xxxxxxx> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx> Cc: Olga Kornievskaia <kolga@xxxxxxxxxx> Cc: Richard Weinberger <richard@xxxxxx> Cc: Rob Clark <robdclark@xxxxxxxxx> Cc: Rob Herring <robh@xxxxxxxxxx> Cc: Rodrigo Vivi <rodrigo.vivi@xxxxxxxxx> Cc: Sean Paul <sean@xxxxxxxxxx> Cc: Song Liu <song@xxxxxxxxxx> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx> Cc: Tomeu Vizoso <tomeu.vizoso@xxxxxxxxxxxxx> Cc: Tom Talpey <tom@xxxxxxxxxx> Cc: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx> Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx> Cc: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> Cc: Yue Hu <huyue2@xxxxxxxxxxx> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> --- include/linux/shrinker.h | 7 ++ mm/internal.h | 11 +++ mm/shrinker.c | 102 +++++++++++++++++++++++++++++++++++++ mm/shrinker_debug.c | 17 +++++- 4 files changed, 135 insertions(+), 2 deletions(-) --- a/include/linux/shrinker.h~mm-shrinker-add-infrastructure-for-dynamically-allocating-shrinker +++ a/include/linux/shrinker.h @@ -70,6 +70,8 @@ struct shrinker { int seeks; /* seeks to recreate an obj */ unsigned flags; + void *private_data; + /* These are for internal use */ struct list_head list; #ifdef CONFIG_MEMCG @@ -95,6 +97,11 @@ struct shrinker { * non-MEMCG_AWARE shrinker should not have this flag set. */ #define SHRINKER_NONSLAB (1 << 3) +#define SHRINKER_ALLOCATED (1 << 4) + +struct shrinker *shrinker_alloc(unsigned int flags, const char *fmt, ...); +void shrinker_register(struct shrinker *shrinker); +void shrinker_free(struct shrinker *shrinker); extern int __printf(2, 3) prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...); --- a/mm/internal.h~mm-shrinker-add-infrastructure-for-dynamically-allocating-shrinker +++ a/mm/internal.h @@ -1162,6 +1162,9 @@ unsigned long shrink_slab(gfp_t gfp_mask #ifdef CONFIG_SHRINKER_DEBUG extern int shrinker_debugfs_add(struct shrinker *shrinker); +extern int shrinker_debugfs_name_alloc(struct shrinker *shrinker, + const char *fmt, va_list ap); +extern void shrinker_debugfs_name_free(struct shrinker *shrinker); extern struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker, int *debugfs_id); extern void shrinker_debugfs_remove(struct dentry *debugfs_entry, @@ -1171,6 +1174,14 @@ static inline int shrinker_debugfs_add(s { return 0; } +static inline int shrinker_debugfs_name_alloc(struct shrinker *shrinker, + const char *fmt, va_list ap) +{ + return 0; +} +static inline void shrinker_debugfs_name_free(struct shrinker *shrinker) +{ +} static inline struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker, int *debugfs_id) { --- a/mm/shrinker.c~mm-shrinker-add-infrastructure-for-dynamically-allocating-shrinker +++ a/mm/shrinker.c @@ -550,6 +550,108 @@ out: return freed; } +struct shrinker *shrinker_alloc(unsigned int flags, const char *fmt, ...) 
+{
+	struct shrinker *shrinker;
+	unsigned int size;
+	va_list ap;
+	int err;
+
+	shrinker = kzalloc(sizeof(struct shrinker), GFP_KERNEL);
+	if (!shrinker)
+		return NULL;
+
+	va_start(ap, fmt);
+	err = shrinker_debugfs_name_alloc(shrinker, fmt, ap);
+	va_end(ap);
+	if (err)
+		goto err_name;
+
+	shrinker->flags = flags | SHRINKER_ALLOCATED;
+	shrinker->seeks = DEFAULT_SEEKS;
+
+	if (flags & SHRINKER_MEMCG_AWARE) {
+		err = prealloc_memcg_shrinker(shrinker);
+		if (err == -ENOSYS)
+			shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
+		else if (err == 0)
+			goto done;
+		else
+			goto err_flags;
+	}
+
+	/*
+	 * The nr_deferred is available on per memcg level for memcg aware
+	 * shrinkers, so only allocate nr_deferred in the following cases:
+	 *  - non memcg aware shrinkers
+	 *  - !CONFIG_MEMCG
+	 *  - memcg is disabled by kernel command line
+	 */
+	size = sizeof(*shrinker->nr_deferred);
+	if (flags & SHRINKER_NUMA_AWARE)
+		size *= nr_node_ids;
+
+	shrinker->nr_deferred = kzalloc(size, GFP_KERNEL);
+	if (!shrinker->nr_deferred)
+		goto err_flags;
+
+done:
+	return shrinker;
+
+err_flags:
+	shrinker_debugfs_name_free(shrinker);
+err_name:
+	kfree(shrinker);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(shrinker_alloc);
+
+void shrinker_register(struct shrinker *shrinker)
+{
+	if (unlikely(!(shrinker->flags & SHRINKER_ALLOCATED))) {
+		pr_warn("Must use shrinker_alloc() to dynamically allocate the shrinker");
+		return;
+	}
+
+	down_write(&shrinker_rwsem);
+	list_add_tail(&shrinker->list, &shrinker_list);
+	shrinker->flags |= SHRINKER_REGISTERED;
+	shrinker_debugfs_add(shrinker);
+	up_write(&shrinker_rwsem);
+}
+EXPORT_SYMBOL_GPL(shrinker_register);
+
+void shrinker_free(struct shrinker *shrinker)
+{
+	struct dentry *debugfs_entry = NULL;
+	int debugfs_id;
+
+	if (!shrinker)
+		return;
+
+	down_write(&shrinker_rwsem);
+	if (shrinker->flags & SHRINKER_REGISTERED) {
+		list_del(&shrinker->list);
+		debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id);
+		shrinker->flags &= ~SHRINKER_REGISTERED;
+	} else {
+		shrinker_debugfs_name_free(shrinker);
+	}
+
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
+		unregister_memcg_shrinker(shrinker);
+	up_write(&shrinker_rwsem);
+
+	if (debugfs_entry)
+		shrinker_debugfs_remove(debugfs_entry, debugfs_id);
+
+	kfree(shrinker->nr_deferred);
+	shrinker->nr_deferred = NULL;
+
+	kfree(shrinker);
+}
+EXPORT_SYMBOL_GPL(shrinker_free);
+
 /*
  * Add a shrinker callback to be called from the vm.
  */
--- a/mm/shrinker_debug.c~mm-shrinker-add-infrastructure-for-dynamically-allocating-shrinker
+++ a/mm/shrinker_debug.c
@@ -193,6 +193,20 @@ int shrinker_debugfs_add(struct shrinker
 	return 0;
 }
 
+int shrinker_debugfs_name_alloc(struct shrinker *shrinker, const char *fmt,
+				va_list ap)
+{
+	shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);
+
+	return shrinker->name ? 0 : -ENOMEM;
+}
+
+void shrinker_debugfs_name_free(struct shrinker *shrinker)
+{
+	kfree_const(shrinker->name);
+	shrinker->name = NULL;
+}
+
 int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
 {
 	struct dentry *entry;
@@ -241,8 +255,7 @@ struct dentry *shrinker_debugfs_detach(s
 
 	lockdep_assert_held(&shrinker_rwsem);
 
-	kfree_const(shrinker->name);
-	shrinker->name = NULL;
+	shrinker_debugfs_name_free(shrinker);
 
 	*debugfs_id = entry ? shrinker->debugfs_id : -1;
 	shrinker->debugfs_entry = NULL;
_

Patches currently in -mm which might be from zhengqi.arch@xxxxxxxxxxxxx are

mm-move-some-shrinker-related-function-declarations-to-mm-internalh.patch
mm-vmscan-move-shrinker-related-code-into-a-separate-file.patch
mm-shrinker-remove-redundant-shrinker_rwsem-in-debugfs-operations.patch
drm-ttm-introduce-pool_shrink_rwsem.patch
mm-shrinker-add-infrastructure-for-dynamically-allocating-shrinker.patch
kvm-mmu-dynamically-allocate-the-x86-mmu-shrinker.patch
binder-dynamically-allocate-the-android-binder-shrinker.patch
drm-ttm-dynamically-allocate-the-drm-ttm_pool-shrinker.patch
xenbus-backend-dynamically-allocate-the-xen-backend-shrinker.patch
erofs-dynamically-allocate-the-erofs-shrinker.patch
f2fs-dynamically-allocate-the-f2fs-shrinker.patch
gfs2-dynamically-allocate-the-gfs2-glock-shrinker.patch
gfs2-dynamically-allocate-the-gfs2-qd-shrinker.patch
nfsv42-dynamically-allocate-the-nfs-xattr-shrinkers.patch
nfs-dynamically-allocate-the-nfs-acl-shrinker.patch
nfsd-dynamically-allocate-the-nfsd-filecache-shrinker.patch
quota-dynamically-allocate-the-dquota-cache-shrinker.patch
ubifs-dynamically-allocate-the-ubifs-slab-shrinker.patch
rcu-dynamically-allocate-the-rcu-lazy-shrinker.patch
rcu-dynamically-allocate-the-rcu-kfree-shrinker.patch
mm-thp-dynamically-allocate-the-thp-related-shrinkers.patch
sunrpc-dynamically-allocate-the-sunrpc_cred-shrinker.patch
mm-workingset-dynamically-allocate-the-mm-shadow-shrinker.patch
drm-i915-dynamically-allocate-the-i915_gem_mm-shrinker.patch
drm-msm-dynamically-allocate-the-drm-msm_gem-shrinker.patch
drm-panfrost-dynamically-allocate-the-drm-panfrost-shrinker.patch
dm-dynamically-allocate-the-dm-bufio-shrinker.patch
dm-zoned-dynamically-allocate-the-dm-zoned-meta-shrinker.patch
md-raid5-dynamically-allocate-the-md-raid5-shrinker.patch
bcache-dynamically-allocate-the-md-bcache-shrinker.patch
vmw_balloon-dynamically-allocate-the-vmw-balloon-shrinker.patch
virtio_balloon-dynamically-allocate-the-virtio-balloon-shrinker.patch
mbcache-dynamically-allocate-the-mbcache-shrinker.patch
ext4-dynamically-allocate-the-ext4-es-shrinker.patch
jbd2ext4-dynamically-allocate-the-jbd2-journal-shrinker.patch
nfsd-dynamically-allocate-the-nfsd-client-shrinker.patch
nfsd-dynamically-allocate-the-nfsd-reply-shrinker.patch
xfs-dynamically-allocate-the-xfs-buf-shrinker.patch
xfs-dynamically-allocate-the-xfs-inodegc-shrinker.patch
xfs-dynamically-allocate-the-xfs-qm-shrinker.patch
zsmalloc-dynamically-allocate-the-mm-zspool-shrinker.patch
fs-super-dynamically-allocate-the-s_shrink.patch
mm-shrinker-remove-old-apis.patch
mm-shrinker-add-a-secondary-array-for-shrinker_info-map-nr_deferred.patch
mm-shrinker-rename-preallocunregister_memcg_shrinker-to-shrinker_memcg_allocremove.patch
mm-shrinker-make-global-slab-shrink-lockless.patch
mm-shrinker-make-memcg-slab-shrink-lockless.patch
mm-shrinker-hold-write-lock-to-reparent-shrinker-nr_deferred.patch
mm-shrinker-convert-shrinker_rwsem-to-mutex.patch