Re: [PATCH 1/3] writeback: pay attention to wbc->nr_to_write in write_cache_pages

On Thu, Jun 10, 2010 at 08:58:04AM +1000, Dave Chinner wrote:
> On Wed, Jun 09, 2010 at 02:09:42PM -0700, Andrew Morton wrote:
> > On Wed,  9 Jun 2010 10:37:18 +1000
> > Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > 
> > > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > > 
> > > If a filesystem writes more than one page in ->writepage, write_cache_pages
> > > fails to notice this and continues to attempt writeback when wbc->nr_to_write
> > > has gone negative - this trace was captured from XFS:
> > > 
> > >     wbc_writeback_start: towrt=1024
> > >     wbc_writepage: towrt=1024
> > >     wbc_writepage: towrt=0
> > >     wbc_writepage: towrt=-1
> > >     wbc_writepage: towrt=-5
> > >     wbc_writepage: towrt=-21
> > >     wbc_writepage: towrt=-85
> > > 
> > > This has adverse effects on filesystem writeback behaviour. write_cache_pages()
> > > needs to terminate after a certain number of pages are written, not after a
> > > certain number of calls to ->writepage are made.  This is a regression
> > > introduced by 17bc6c30cf6bfffd816bdc53682dd46fc34a2cf4 ("vfs: Add
> > > no_nrwrite_index_update writeback control flag"), but cannot be reverted
> > > directly due to subsequent bug fixes that have gone in on top of it.
> > 
> > Might be needed in -stable.  Unfortunately the most important piece of
> > information which is needed to make that decision was cunningly hidden
> > from us behind the vague-to-the-point-of-uselessness term "adverse
> > effects".
> > 
> > _what_ "adverse effects"??
> 
> Depends on how the specific filesystem handles a negative
> nr_to_write, doesn't it? I can't speak for the exact effect on
> anything other than XFS, except to say that most ->writepage
> implementations don't handle the wbc->nr_to_write < 0 case specifically...
> 
> For XFS, it results in increased CPU usage because it triggers
> page-at-a-time allocation (i.e. no clustering), which increases
> overhead in the elevator due to the merging requirements of
> single-page bios, and increased fragmentation due to small
> interleaved allocations on concurrent writeback workloads.
> Effectively it causes accelerated aging of XFS filesystems...

Sorry, forgot to address the -stable part of the question.

This series depends on the ext4 change to use its own
write_cache_pages() going into -stable first (i.e.
8e48dcfbd7c0892b4cfd064d682cc4c95a29df32 "ext4: Use our own
write_cache_pages()").

I'd suggest that all 4 patches (the ext4 patch and the three in this
series) should go back to 2.6.34-stable due to the long-term effect
this writeback bug could have on XFS filesystems, and the
sync-taking-too-long problem has been fairly widely reported since at
least 2.6.32...
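
To illustrate the termination logic in isolation (this is just a toy
userspace sketch I knocked up, not kernel code and not the patch
itself): because a clustering ->writepage can account for many pages
per call, the loop has to check the remaining budget after every call
rather than count calls:

/*
 * Toy userspace model of the termination logic (illustrative only).
 * ->writepage may write and account for several pages per call, so
 * the caller must stop when the nr_to_write budget is exhausted,
 * not after a fixed number of ->writepage calls.
 */
#include <stdio.h>

struct writeback_control {
	long nr_to_write;		/* pages remaining in this round */
};

/* Models a clustering ->writepage: writes up to 16 pages per call. */
static int toy_writepage(struct writeback_control *wbc)
{
	long written = wbc->nr_to_write < 16 ? wbc->nr_to_write : 16;

	if (written < 1)
		written = 1;
	wbc->nr_to_write -= written;	/* filesystem accounts all pages */
	return 0;
}

static void toy_write_cache_pages(struct writeback_control *wbc)
{
	int calls = 0;

	for (;;) {
		toy_writepage(wbc);
		calls++;
		printf("writepage call %d: nr_to_write now %ld\n",
		       calls, wbc->nr_to_write);
		/*
		 * Check the budget after every call, because it can
		 * drop by more than one page at a time.
		 */
		if (wbc->nr_to_write <= 0)
			break;
	}
}

int main(void)
{
	struct writeback_control wbc = { .nr_to_write = 1024 };

	toy_write_cache_pages(&wbc);
	return 0;
}

Without that wbc->nr_to_write <= 0 check the loop keeps issuing
->writepage calls with an ever more negative budget, which is exactly
what the trace quoted above shows.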

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx