This is a note to let you know that I've just added the patch titled

    mbcache: don't reclaim used entries

to the 5.10-stable tree which can be found at:

http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mbcache-don-t-reclaim-used-entries.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit f0057e035b042df0fbd7e425bad663ecc584e266
Author: Jan Kara <jack@xxxxxxx>
Date:   Tue Jul 12 12:54:20 2022 +0200

    mbcache: don't reclaim used entries

    [ Upstream commit 58318914186c157477b978b1739dfe2f1b9dc0fe ]

    Do not reclaim entries that are currently used by somebody from a
    shrinker. Firstly, these entries are likely useful. Secondly, we will
    need to keep such entries to protect pending increment of xattr block
    refcount.

    CC: stable@xxxxxxxxxxxxxxx
    Fixes: 82939d7999df ("ext4: convert to mbcache2")
    Signed-off-by: Jan Kara <jack@xxxxxxx>
    Link: https://lore.kernel.org/r/20220712105436.32204-1-jack@xxxxxxx
    Signed-off-by: Theodore Ts'o <tytso@xxxxxxx>
    Stable-dep-of: a44e84a9b776 ("ext4: fix deadlock due to mbcache entry corruption")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/mbcache.c b/fs/mbcache.c
index 97c54d3a2227..cfc28129fb6f 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -288,7 +288,7 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
 	while (nr_to_scan-- && !list_empty(&cache->c_list)) {
 		entry = list_first_entry(&cache->c_list,
 					 struct mb_cache_entry, e_list);
-		if (entry->e_referenced) {
+		if (entry->e_referenced || atomic_read(&entry->e_refcnt) > 2) {
 			entry->e_referenced = 0;
 			list_move_tail(&entry->e_list, &cache->c_list);
 			continue;
@@ -302,6 +302,14 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
 		spin_unlock(&cache->c_list_lock);
 		head = mb_cache_entry_head(cache, entry->e_key);
 		hlist_bl_lock(head);
+		/* Now a reliable check if the entry didn't get used... */
+		if (atomic_read(&entry->e_refcnt) > 2) {
+			hlist_bl_unlock(head);
+			spin_lock(&cache->c_list_lock);
+			list_add_tail(&entry->e_list, &cache->c_list);
+			cache->c_entry_count++;
+			continue;
+		}
 		if (!hlist_bl_unhashed(&entry->e_hash_list)) {
 			hlist_bl_del_init(&entry->e_hash_list);
 			atomic_dec(&entry->e_refcnt);
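
A note on the new check, for reviewers: the threshold in
atomic_read(&entry->e_refcnt) > 2 appears to reflect the convention that a
live entry holds one reference for the hash table and one for the LRU list,
so anything above 2 means an active user. The standalone userspace C sketch
below illustrates the two-phase test the patch performs (a racy check under
the list lock, then a reliable recheck under the hash-bucket lock). It is
an illustration only, not part of the patch: struct cache_entry,
ENTRY_BASE_REFS, looks_used() and still_used() are invented names and do
not appear in fs/mbcache.c.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Assumed baseline: every live entry holds one reference for the hash
 * table and one for the LRU list, mirroring the "> 2" check above. */
#define ENTRY_BASE_REFS 2

struct cache_entry {
	atomic_int refcnt;
	bool referenced;
};

/* Cheap test, done under the LRU-list lock in the kernel code.  It may
 * race with concurrent lookups, which is harmless: a stale answer only
 * makes the shrinker skip or retry an entry. */
static bool looks_used(const struct cache_entry *e)
{
	return e->referenced || atomic_load(&e->refcnt) > ENTRY_BASE_REFS;
}

/* Reliable test, done once the hash-bucket lock is held.  At that point
 * no new user can gain a reference through a hash lookup, so a count
 * above the baseline really means the entry is in use and must go back
 * on the LRU list instead of being freed. */
static bool still_used(const struct cache_entry *e)
{
	return atomic_load(&e->refcnt) > ENTRY_BASE_REFS;
}

int main(void)
{
	struct cache_entry e = { .refcnt = ENTRY_BASE_REFS,
				 .referenced = false };

	printf("fresh entry used? %d\n", looks_used(&e)); /* 0: reclaimable */
	atomic_fetch_add(&e.refcnt, 1);                   /* a user takes a ref */
	printf("after get, used?  %d\n", still_used(&e)); /* 1: keep it */
	return 0;
}

Splitting the test this way keeps the common path cheap: the list-lock
check filters out most busy entries without touching the hash bucket,
while the recheck under hlist_bl_lock() closes the window in which an
entry gains a user between the two locks.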