On Fri 23-08-24 21:07:30, Julian Sun wrote:
> Hi, all
>
> Recently I noticed a bug[1] in btrfs; after digging into it,
> I believe it's a race in the VFS.
>
> Let's assume there is an inode (e.g. ino 261) with i_count 1
> that iput() is called on, while a concurrent thread is calling
> generic_shutdown_super():
>
> cpu0:                              cpu1:
> iput() // i_count is 1
>   ->spin_lock(inode)
>   ->dec i_count to 0
>   ->iput_final()                   generic_shutdown_super()
>     ->__inode_add_lru()              ->evict_inodes()
>       // for some reason[2],           ->if (atomic_read(inode->i_count)) continue;
>       // returns before                // inode 261 passed the above check
>       // list_lru_add_obj()            // and then scheduled out
>   ->spin_unlock()
> // note here: inode 261 is still
> // on the sb list and hash list,
> // and I_FREEING|I_WILL_FREE have
> // not been set
>
> btrfs_iget()
>   // after some function calls
>   ->find_inode()
>     // found the above inode 261
>     ->spin_lock(inode)
>     // checked I_FREEING|I_WILL_FREE
>     // and passed
>     ->__iget()
>     ->spin_unlock(inode)           // scheduled back
>                                    ->spin_lock(inode)
>                                    // check (I_NEW|I_FREEING|I_WILL_FREE) flags,
>                                    // passed and set I_FREEING
> iput()                             ->spin_unlock(inode)
>   ->spin_lock(inode)               ->evict()
>   // dec i_count to 0
>   ->iput_final()
>   ->spin_unlock()
>   ->evict()
>
> Now we have two threads simultaneously evicting the same inode,
> which may trigger the BUG(inode->i_state & I_CLEAR) statement
> both within clear_inode() and iput().
>
> To fix the bug, recheck inode->i_count after holding i_lock.
> Because the first check is valid in most scenarios, the
> overhead of spin_lock() can be reduced.
>
> If there is any misunderstanding, please let me know, thanks.
>
> [1]: https://lore.kernel.org/linux-btrfs/000000000000eabe1d0619c48986@xxxxxxxxxx/
> [2]: The reason might be that 1. SB_ACTIVE was removed or 2. mapping_shrinkable()
> returned false when I reproduced the bug.
>
> Reported-by: syzbot+67ba3c42bcbb4665d3ad@xxxxxxxxxxxxxxxxxxxxxxxxx
> Closes: https://syzkaller.appspot.com/bug?extid=67ba3c42bcbb4665d3ad
> CC: stable@xxxxxxxxxxxxxxx
> Fixes: 63997e98a3be ("split invalidate_inodes()")
> Signed-off-by: Julian Sun <sunjunchao2870@xxxxxxxxx>

Thanks for the fix. It looks good to me and I'm curious how come we didn't
hit this earlier ;). Anyway, feel free to add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

I'd just note that the Fixes tag isn't correct AFAICT. At the time of commit
63997e98a3be we had a global inode_lock and the atomic_read(&inode->i_count)
check was done under it. I'd say it was 55fa6091d831 ("fs: move i_sb_list out
from under inode_lock") or possibly 250df6ed274d ("fs: protect inode->i_state
with inode->i_lock") (depending on how you look at it) that introduced this
problem. Not that it would matter that much these days :).

								Honza

> ---
>  fs/inode.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/fs/inode.c b/fs/inode.c
> index 3a41f83a4ba5..011f630777d0 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -723,6 +723,10 @@ void evict_inodes(struct super_block *sb)
>  			continue;
>
>  		spin_lock(&inode->i_lock);
> +		if (atomic_read(&inode->i_count)) {
> +			spin_unlock(&inode->i_lock);
> +			continue;
> +		}
>  		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
>  			spin_unlock(&inode->i_lock);
>  			continue;
> --
> 2.39.2

--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR