Re: [PATCH 6/6] writeback: refill b_io iff empty

On Tue, May 10, 2011 at 12:31:04PM +0800, Wu Fengguang wrote:
> On Fri, May 06, 2011 at 10:21:55PM +0800, Jan Kara wrote:
> > On Fri 06-05-11 13:29:55, Wu Fengguang wrote:
> > > On Fri, May 06, 2011 at 12:37:08AM +0800, Jan Kara wrote:
> > > > On Wed 04-05-11 15:39:31, Wu Fengguang wrote:
> > > > > To help understand the behavior change, I wrote the writeback_queue_io
> > > > > trace event, and found very different patterns between
> > > > > - vanilla kernel
> > > > > - this patchset plus the sync livelock fixes
> > > > > 
> > > > > Basically the vanilla kernel each time pulls a random number of inodes
> > > > > from b_dirty, while the patched kernel tends to pull a fixed number of
> > > > > inodes (enqueue=1031) from b_dirty. The new behavior is very interesting...
> > > >   This regularity is really strange. Did you have a chance to look more into
> > > > it? I find it highly unlikely that there would be exactly 1031 dirty inodes
> > > > in b_dirty list every time you call move_expired_inodes()...
> > > 
> > > Jan, I got some results for ext4. The total dd+tar+sync time is
> > > decreased from 177s to 167s. The other numbers are either raised or
> > > dropped.
> >   Nice, but what I was more curious about was to understand why you saw
> > enqueued=1031 all the time.
> 
> Maybe some unknown interactions with XFS? Attached is another trace
> with both writeback_single_inode and writeback_queue_io.

Perhaps because write throttling is limiting the number of files
being dirtied to match the number of files being cleaned? Hence they
age at roughly the same rate as writeback is cleaning them,
especially as most files are only a single page in size?

Or perhaps that is the rate at which IO completions are occurring,
updating the inode size and redirtying the inode? After all, there
are lots of inodes showing only state=I_DIRTY_SYNC and wrote=0 in
the traces around the point where it starts going to ~1000 inodes
per queue_io call....
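
To make that concrete, here is a hypothetical sketch (not the actual
XFS completion path) of the kind of completion-time size update I
mean - it dirties only inode metadata, which is exactly the
state=I_DIRTY_SYNC, wrote=0 pattern in the trace:

#include <linux/fs.h>

/*
 * Hypothetical sketch, not the real XFS completion code: an I/O
 * completion that only bumps the in-core file size dirties inode
 * metadata alone, so the inode goes back on b_dirty with nothing
 * but I_DIRTY_SYNC set and no dirty pages left to write.
 */
static void example_end_io_update_size(struct inode *inode, loff_t new_size)
{
	if (new_size > i_size_read(inode))
		i_size_write(inode, new_size);

	/* metadata-only redirty - hence wrote=0 when it is written back */
	__mark_inode_dirty(inode, I_DIRTY_SYNC);
}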

Or maybe a combination of both?
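
Either way, with the patched behaviour each refill is one complete
batch of whatever expired while the previous batch was being written
back. Schematically (a sketch of the idea as it would sit in
fs/fs-writeback.c, not the literal patch):

#include <linux/backing-dev.h>
#include <linux/writeback.h>

/*
 * Schematic only: expired inodes move from b_dirty to b_io solely
 * when b_io has been completely drained, i.e. one full batch per
 * refill rather than topping b_io up on every writeback pass.
 */
static void refill_b_io_iff_empty(struct bdi_writeback *wb,
				  unsigned long *older_than_this)
{
	/* caller holds the writeback list lock */
	if (list_empty(&wb->b_io))
		queue_io(wb, older_than_this);	/* b_dirty -> b_io */
}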

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

