On Wed 15-06-16 13:35:10, Tahsin Erdogan wrote:
> Asynchronous wb switching of inodes takes an additional ref count on an
> inode to make sure the inode remains valid until the switchover is
> completed.
>
> However, anyone calling ihold() must already hold a ref count on the
> inode, and in this case inode->i_count may already be zero:
>
> ------------[ cut here ]------------
> WARNING: CPU: 1 PID: 917 at fs/inode.c:397 ihold+0x2b/0x30
> CPU: 1 PID: 917 Comm: kworker/u4:5 Not tainted 4.7.0-rc2+ #49
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> Workqueue: writeback wb_workfn (flush-8:16)
>  0000000000000000 ffff88007ca0fb58 ffffffff805990af 0000000000000000
>  0000000000000000 ffff88007ca0fb98 ffffffff80268702 0000018d000004e2
>  ffff88007cef40e8 ffff88007c9b89a8 ffff880079e3a740 0000000000000003
> Call Trace:
>  [<ffffffff805990af>] dump_stack+0x4d/0x6e
>  [<ffffffff80268702>] __warn+0xc2/0xe0
>  [<ffffffff802687d8>] warn_slowpath_null+0x18/0x20
>  [<ffffffff8035b4ab>] ihold+0x2b/0x30
>  [<ffffffff80367ecc>] inode_switch_wbs+0x11c/0x180
>  [<ffffffff80369110>] wbc_detach_inode+0x170/0x1a0
>  [<ffffffff80369abc>] writeback_sb_inodes+0x21c/0x530
>  [<ffffffff80369f7e>] wb_writeback+0xee/0x1e0
>  [<ffffffff8036a147>] wb_workfn+0xd7/0x280
>  [<ffffffff80287531>] ? try_to_wake_up+0x1b1/0x2b0
>  [<ffffffff8027bb09>] process_one_work+0x129/0x300
>  [<ffffffff8027be06>] worker_thread+0x126/0x480
>  [<ffffffff8098cde7>] ? __schedule+0x1c7/0x561
>  [<ffffffff8027bce0>] ? process_one_work+0x300/0x300
>  [<ffffffff80280ff4>] kthread+0xc4/0xe0
>  [<ffffffff80335578>] ? kfree+0xc8/0x100
>  [<ffffffff809903cf>] ret_from_fork+0x1f/0x40
>  [<ffffffff80280f30>] ? __kthread_parkme+0x70/0x70
> ---[ end trace aaefd2fd9f306bc4 ]---
>
> Acked-by: Tejun Heo <tj@xxxxxxxxxx>
> Signed-off-by: Tahsin Erdogan <tahsin@xxxxxxxxxx>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
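
For context, here is a paraphrased sketch of the change under discussion, not
the literal patch: the idea is for inode_switch_wbs() in fs/fs-writeback.c to
take its extra reference with __iget() while inode->i_lock is still held and
I_FREEING has been checked, instead of calling ihold() after the lock is
dropped. ihold() warns when it would raise i_count from 0 to 1, which is
exactly what the writeback path can hit: the inode's last reference may
already be gone even though the inode has not yet been freed.

	spin_lock(&inode->i_lock);
	/*
	 * Bail out if the inode is already being switched or freed, or
	 * if it is already attached to the target wb.
	 */
	if (inode->i_state & (I_WB_SWITCH | I_FREEING) ||
	    inode_to_wb(inode) == new_wb) {
		spin_unlock(&inode->i_lock);
		goto out_free;
	}
	inode->i_state |= I_WB_SWITCH;
	/*
	 * Take the reference under i_lock: __iget() is a bare increment,
	 * so it is safe even when i_count is currently zero, whereas
	 * ihold() would WARN here.
	 */
	__iget(inode);
	spin_unlock(&inode->i_lock);

The identifiers above (I_WB_SWITCH, I_FREEING, __iget(), inode_to_wb()) are
real mainline names, but the exact shape of the hunk is approximated from the
commit message and the warning's call trace rather than copied from the patch.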