On Wed, Oct 12, 2011 at 05:53:59AM +0800, Jan Kara wrote:
> On Tue 11-10-11 10:36:38, Wu Fengguang wrote:
> > On Tue, Oct 11, 2011 at 07:30:07AM +0800, Jan Kara wrote:
> > > On Mon 10-10-11 19:31:30, Wu Fengguang wrote:
> > > > On Mon, Oct 10, 2011 at 07:21:33PM +0800, Jan Kara wrote:
> > > > > Hi Fengguang,
> > > > >
> > > > > On Sat 08-10-11 12:00:36, Wu Fengguang wrote:
> > > > > > The test results do not look good: btrfs is heavily impacted
> > > > > > and the other filesystems are slightly impacted.
> > > > > >
> > > > > > I'll send you the detailed logs in private emails (too large
> > > > > > for the mailing list). Basically I noticed many writeback_wait
> > > > > > traces that never appear w/o this patch.
> > > > > OK, thanks for running these tests. I'll have a look at the
> > > > > detailed logs. I guess the difference can be caused by the
> > > > > changes in the redirty/requeue logic in the second patch (the
> > > > > changes in the first patch could possibly split one
> > > > > writeback_wait event into several, but could never introduce
> > > > > new events).
> > > > >
> > > > > I guess I'll also try to reproduce the problem, since it should
> > > > > be pretty easy when you see such a huge regression even with 1
> > > > > dd process on a btrfs filesystem.
> > > > >
> > > > > > In the btrfs cases that see larger regressions, I see large
> > > > > > fluctuations in the writeout bandwidth and long disk idle
> > > > > > periods. It's still a bit puzzling how all these happen..
> > > > > Yes, I don't understand it yet either...
> > > >
> > > > Jan, it's obviously caused by this chunk, which is not really
> > > > necessary for fixing Christoph's problem. So the easy way is to
> > > > go ahead without this chunk.
> > > Yes, thanks a lot for debugging this! I'd still like to understand
> > > why the hunk below is causing such a big problem for btrfs. I was
> > > looking into the traces and all I could find so far was that for
> > > some reason the relevant inode (ino 257) was not getting queued for
> > > writeback for a long time (e.g. 20 seconds), which introduced disk
> > > idle times and thus bad throughput. But I don't understand yet why
> > > the inode was not queued for such a long time... Today it's too
> > > late, but I'll continue with my investigation tomorrow.
> >
> > Yeah, I have exactly the same observation and puzzle..
>
> OK, I dug more into this and I think I found an explanation. The
> problem starts at
>
>   flush-btrfs-1-1336 [005] 20.688011: writeback_start: bdi btrfs-1:
>     sb_dev 0:0 nr_pages=23685 sync_mode=0 kupdate=1 range_cyclic=1
>     background=0 reason=periodic
>
> in the btrfs trace you sent me, when we start "kupdate" style writeback
> for bdi "btrfs-1". This work then blocks the flusher thread up to the
> moment
>
>   flush-btrfs-1-1336 [007] 45.707479: writeback_start: bdi btrfs-1:
>     sb_dev 0:0 nr_pages=18173 sync_mode=0 kupdate=1 range_cyclic=1
>     background=0 reason=periodic
>   flush-btrfs-1-1336 [007] 45.707479: writeback_written: bdi btrfs-1:
>     sb_dev 0:0 nr_pages=18173 sync_mode=0 kupdate=1 range_cyclic=1
>     background=0 reason=periodic
>
> (i.e. for 25 seconds). The reason why this work blocks the flusher
> thread for so long is that btrfs has a "btree inode" - essentially an
> inode holding filesystem metadata - and btrfs ignores any
> ->writepages() request for this inode coming from kupdate style
> writeback. So we always try to write this inode, make no progress,
> requeue the inode (as its mapping is still tagged dirty), see that
> b_more_io is non-empty, and so we sleep for a while and then retry.
>
> We do not include inode 257, which holds the real dirty data, in the
> writeback because this is kupdate style writeback and inode 257 does
> not have a dirty timestamp old enough. This loop would break either
> after 30s, when the inode with data becomes old enough, or - as we see
> above - at the moment when btrfs decides to do a transaction commit
> and cleans the metadata inode by its own methods. In either case this
> is far too late...

Yes indeed. Good catch!

The implication of this case is: never put an inode back on b_more_io
unless we have made some progress cleaning some of its pages or its
metadata. Failing to do so will lead to

- busy looping (which can be fixed by patch 1/2 "writeback: Improve
  busyloop prevention")

- blocking the current work (and in turn the other queued works) for a
  long time, where the other pending works may tend to work on a
  different set of inodes or have different criteria for the FS to make
  progress. The existing examples are the for_kupdate test in btrfs and
  the SYNC vs ASYNC tests in general. And I'm planning to send writeback
  works from the vmscan code to write a specific inode..

In this sense, converting the redirty_tail() calls to requeue_io() does
not look like the right direction.

If we instead change redirty_tail() to the earlier proposed
requeue_io_wait(), all the known problems can be solved nicely.

> So for now I don't see a better alternative than to revert to the old
> behavior in writeback_single_inode() as you suggested earlier. That way
> we would redirty_tail() inodes which we cannot clean and thus they
> won't cause livelocking of the kupdate work.

requeue_io_wait() can equally avoid touching inode->dirtied_when :)
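To recap, the idea behind requeue_io_wait() is roughly the following (a
sketch only, not necessarily the exact earlier patch; it assumes a new
b_more_io_wait list on struct bdi_writeback whose inodes get spliced
back into b_io when the next writeback round starts):

static void requeue_io_wait(struct inode *inode, struct bdi_writeback *wb)
{
	/*
	 * Park the inode for an opportunistic retry in some later
	 * round. Unlike redirty_tail(), this leaves dirtied_when
	 * untouched; unlike requeue_io(), it won't make the current
	 * round busy loop on an inode we cannot make progress with.
	 */
	assert_spin_locked(&wb->list_lock);
	list_move(&inode->i_wb_list, &wb->b_more_io_wait);
}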
> Longer term we might want to be more clever in switching away from
> kupdate style writeback to pure background writeback, but it's not yet
> clear to me what the logic should be so that we give enough preference
> to old inodes...

We'll need to adequately update older_than_this in the wb_writeback()
loop for background work. Then we can make the switch.

> New version of the second patch is attached.
>
> 								Honza
>
> @@ -583,10 +597,10 @@ static long writeback_sb_inodes(struct super_block *sb,
>  			wrote++;
>  		if (wbc.pages_skipped) {
>  			/*
> -			 * writeback is not making progress due to locked
> -			 * buffers. Skip this inode for now.
> +			 * Writeback is not making progress due to unavailable
> +			 * fs locks or similar condition. Retry in next round.
>  			 */
> -			redirty_tail(inode, wb);
> +			requeue_io(inode, wb);
>  		}
>  		spin_unlock(&inode->i_lock);
>  		spin_unlock(&wb->list_lock);

In the case where writeback_single_inode() has just redirty_tail()ed
the inode, it's not good to requeue_io() it here. So I'd suggest
keeping the original code, or removing the if (pages_skipped) {} block
entirely.

Thanks,
Fengguang
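PS: to make the older_than_this point above a bit more concrete, here
is roughly what I have in mind for the wb_writeback() loop (an untested
sketch; the for_background branch is the new part, and oldest_jif is
assumed to be the local value that older_than_this points at):

	for (;;) {
		...
		/*
		 * Sketch: refresh the expire threshold on each trip
		 * through the loop, so it becomes a moving target for
		 * a long running kupdate work, and let background work
		 * include all dirty inodes instead of only the expired
		 * ones.
		 */
		if (work->for_kupdate) {
			oldest_jif = jiffies -
				msecs_to_jiffies(dirty_expire_interval * 10);
		} else if (work->for_background)
			oldest_jif = jiffies;
		...
	}

Compared with computing the threshold once before the loop, this lets a
long running work eventually pick up inode 257 once its dirty timestamp
expires, instead of idling on a stale value.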