bugzilla-daemon@xxxxxxxxxxxxxxxxxxx writes:

Hi Jens,

Just FYI, we have found a regression that was caused by your famous writeback
patch 03ba3782e8dcc5b0e1efe440d33084f066e38cae. I'm not allowed to add you to
the CC list in BZ, which is why I wrote this mail.

Before the patch, __sync_filesystem() called writeback_single_inode()
directly, but now it is called indirectly (from the flush-X:X task), which
requires the super_block in question to be pinned. But it is impossible to
pin this SB on umount, because we already hold the s_umount sem for write,
so effectively we have already pinned that SB. So my proposal is to treat
umount the same way as WB_SYNC_ALL and skip the pinning stage.

> https://bugzilla.kernel.org/show_bug.cgi?id=15906
>
> Dmitry Monakhov <dmonakhov@xxxxxxxxxx> changed:
>
>            What    |Removed                     |Added
> ----------------------------------------------------------------------------
>                  CC|                            |dmonakhov@xxxxxxxxxx
>
> --- Comment #13 from Dmitry Monakhov <dmonakhov@xxxxxxxxxx> 2010-05-05 07:28:10 ---
> Yep, I already know about that issue. In fact it was broken by the following
> commit:
>
> From 03ba3782e8dcc5b0e1efe440d33084f066e38cae Mon Sep 17 00:00:00 2001
> From: Jens Axboe <jens.axboe@xxxxxxxxxx>
> Date: Wed, 9 Sep 2009 09:08:54 +0200
> Subject: [PATCH] writeback: switch to per-bdi threads for flushing data
>
> The problem is that __sync_filesystem(0) no longer works on umount, because
> the sb cannot be pinned: the s_umount sem is downed for write and s_root is
> NULL.
>
> And in fact ext3 is also broken in the "-o barrier=1" case. The attached
> patch fixes the original regression, but there is one more issue left:
>
> the delalloc option. A dirty inode is still dirty even after the first call
> of writeback_single_inode(), which is called from __sync_filesystem(0),
> because delalloc allocation happens during inode write. So it takes a second
> __sync_filesystem() call to clear the dirty flags. Currently I'm working on
> that issue. I hope I'll post a solution today.
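
To make the "skip the pinning stage" proposal above concrete, below is a rough
sketch of the shape such a check could take in the flusher thread's
pin-superblock path. The helper name and the exact locking details are
illustrative assumptions, not a quote of the current fs/fs-writeback.c code:

#include <linux/fs.h>
#include <linux/rwsem.h>
#include <linux/writeback.h>

/*
 * Sketch only: the helper name and details are illustrative, not the
 * exact code in fs/fs-writeback.c.  The idea is that for WB_SYNC_ALL
 * -- and, per the proposal, for the umount path as well -- the caller
 * already holds s_umount, so the superblock is effectively pinned and
 * the trylock-based pinning can be skipped.
 */
static int pin_sb_sketch(struct writeback_control *wbc, struct super_block *sb)
{
	if (wbc->sync_mode == WB_SYNC_ALL)
		return 0;	/* caller holds s_umount: already pinned */

	/*
	 * Asynchronous flusher-thread writeback: try to take s_umount
	 * for read.  During umount it is held for write, so the trylock
	 * fails and the superblock is skipped -- the regression
	 * described above.
	 */
	if (down_read_trylock(&sb->s_umount)) {
		if (sb->s_root)
			return 0;	/* pinned */
		up_read(&sb->s_umount);
	}
	return 1;			/* could not pin, skip this sb */
}

With a check along these lines, the umount-time sync path would behave like
data-integrity sync (WB_SYNC_ALL) and no longer depend on trylock-based
pinning, which can never succeed while s_umount is held for write.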