Re: [PATCH 6/6] writeback: refill b_io iff empty

On Fri, May 06, 2011 at 12:37:08AM +0800, Jan Kara wrote:
> On Wed 04-05-11 15:39:31, Wu Fengguang wrote:
> > To help understand the behavior change, I wrote the writeback_queue_io
> > trace event, and found very different patterns between
> > - vanilla kernel
> > - this patchset plus the sync livelock fixes
> > 
> > Basically, the vanilla kernel pulls a seemingly random number of inodes
> > from b_dirty on each refill, while the patched kernel tends to pull a fixed
> > number of inodes (enqueue=1031) from b_dirty. The new behavior is very interesting...
>   This regularity is really strange. Did you have a chance to look more into
> it? I find it highly unlikely that there would be exactly 1031 dirty inodes
> in b_dirty list every time you call move_expired_inodes()...

Yeah, that's the weird point. The other thing I noticed is a more
regular "flusher - dd - flusher - dd - ..." writeout pattern after
the patches. The vanilla kernel behaves more randomly, and there are
many balance_dirty_pages() IOs from tar.
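For anyone following along, the queueing change under discussion can be modeled
very roughly as below. This is an illustrative sketch only, not the actual
fs/fs-writeback.c code: the struct and the integer counts are hypothetical
stand-ins for the real inode lists, and the expiry logic of
move_expired_inodes() is reduced to a caller-supplied count.

```c
#include <assert.h>

/* Toy model of the per-bdi dirty lists. In the real kernel these are
 * list_heads of struct inode; here they are just counters. */
typedef struct {
	int b_dirty;	/* dirty inodes awaiting writeback */
	int b_io;	/* inodes queued for I/O */
} wb_lists;

/* Vanilla behavior (sketch): every writeback iteration splices the
 * currently-expired dirty inodes onto b_io, regardless of whether the
 * previous batch has been consumed. */
static void queue_io_vanilla(wb_lists *wb, int expired)
{
	wb->b_io += expired;
	wb->b_dirty -= expired;
}

/* Patched behavior (sketch of "refill b_io iff empty"): b_io is only
 * refilled once the previous batch has been fully written out, so each
 * batch completes before newly-expired inodes can join the queue. */
static void queue_io_patched(wb_lists *wb, int expired)
{
	if (wb->b_io == 0) {
		wb->b_io += expired;
		wb->b_dirty -= expired;
	}
}
```

Under this model, the patched variant naturally produces fixed-size batches:
the enqueue count per refill is simply however many inodes expired while the
previous batch was being written, which stays roughly constant under a steady
dirtier like dd.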

I'll try to collect more traces on ext4 tomorrow. Sorry, it's too late
for me now.

Thanks,
Fengguang
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
