Is there a race between __mark_inode_dirty() and evict()?

Hi,

Recently we ran into a NULL pointer dereference panic in our internal 4.9 kernel.
It panics because inode->i_wb has become NULL in wbc_attach_and_unlock_inode(),
and analysis with the crash tool shows that the inode's dirtied_when is zero while
dirtied_time_when is not, which suggests this inode has been used after free.
Looking at both the 4.9 and upstream code, there seems to be a possible race:

__mark_inode_dirty(...)
{
    spin_lock(&inode->i_lock);
    ...
    if (inode->i_state & I_FREEING)
        goto out_unlock_inode;
    ...
    if (!was_dirty) {
        struct bdi_writeback *wb;
        struct list_head *dirty_list;
        bool wakeup_bdi = false;

        wb = locked_inode_to_wb_and_lock_list(inode);
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
       This function unlocks inode->i_lock first and then relocks it. Once
inode->i_lock is dropped, evict() may run in, set the I_FREEING flag, and free the
inode. When locked_inode_to_wb_and_lock_list() later re-takes inode->i_lock, it does
not check I_FREEING again, so a use-after-free on this inode can happen.
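
To illustrate the interleaving I have in mind, here is a small userspace sketch of
the same pattern. It is purely illustrative, not the real kernel code: struct obj,
the "freeing" flag, and relock_helper() are made-up stand-ins for the inode,
I_FREEING, and locked_inode_to_wb_and_lock_list(). One thread passes the freeing
check under the lock, a helper drops and re-takes the lock without rechecking the
flag, and the other thread marks the object as being freed and frees it inside that
window.

/* Illustrative userspace sketch of the suspected interleaving.  All names
 * here are made up and do not correspond to the actual kernel code.  The
 * sleeps only widen the race window so the interleaving is easy to hit. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct obj {
	pthread_mutex_t lock;
	int freeing;		/* plays the role of I_FREEING */
	int data;		/* plays the role of the wb association */
};

static struct obj *shared;

/* Like the suspected helper: entered with obj->lock held, drops it,
 * re-takes it, but never rechecks the freeing flag. */
static void relock_helper(struct obj *o)
{
	pthread_mutex_unlock(&o->lock);
	usleep(100 * 1000);		/* window where the "evictor" can run */
	pthread_mutex_lock(&o->lock);	/* use-after-free if o was freed */
}

static void *dirtier(void *arg)		/* plays __mark_inode_dirty() */
{
	struct obj *o = shared;

	(void)arg;
	pthread_mutex_lock(&o->lock);
	if (o->freeing) {		/* the only "I_FREEING" check */
		pthread_mutex_unlock(&o->lock);
		return NULL;
	}
	relock_helper(o);
	printf("data = %d\n", o->data);	/* may read freed memory */
	pthread_mutex_unlock(&o->lock);
	return NULL;
}

static void *evictor(void *arg)		/* plays evict() */
{
	struct obj *o = shared;

	(void)arg;
	usleep(50 * 1000);		/* let the dirtier pass its check first */
	pthread_mutex_lock(&o->lock);
	o->freeing = 1;
	pthread_mutex_unlock(&o->lock);
	free(o);			/* the "inode" is gone now */
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	shared = calloc(1, sizeof(*shared));
	pthread_mutex_init(&shared->lock, NULL);
	shared->data = 42;

	pthread_create(&t1, NULL, dirtier, NULL);
	pthread_create(&t2, NULL, evictor, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Built with gcc -pthread -fsanitize=address, the access after relock_helper()
re-takes the lock should be flagged as a heap use-after-free, which is the same
shape of problem I suspect around locked_inode_to_wb_and_lock_list().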

I'm not very familiar with the vfs or cgroup writeback code, so could you please
confirm whether this is a real issue? Thanks.

Regards,
Xiaoguang Wang


