Re: [PATCH v6 1/2] sb: add a new writeback list for sync

On Wed, Jan 20, 2016 at 02:26:26PM +0100, Jan Kara wrote:
> On Tue 19-01-16 12:59:12, Brian Foster wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > wait_sb_inodes() currently does a walk of all inodes in the
> > filesystem to find dirty ones to wait on during sync. This is highly
> > inefficient and wastes a lot of CPU when there are lots of clean
> > cached inodes that we don't need to wait on.
> > 
> > To avoid this "all inode" walk, we need to track inodes that are
> > currently under writeback that we need to wait for. We do this by
> > adding inodes to a writeback list on the sb when the mapping is
> > first tagged as having pages under writeback. wait_sb_inodes() can
> > then walk this list of "inodes under IO" and wait specifically just
> > for the inodes that the current sync(2) needs to wait for.
> > 
> > Define a couple helpers to add/remove an inode from the writeback
> > list and call them when the overall mapping is tagged for or cleared
> > from writeback. Update wait_sb_inodes() to walk only the inodes
> > under writeback due to the sync.
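
For reference, a minimal sketch of the two helpers and the sync-side
walk being described. The field and lock names used here (i_wb_list
on the inode, s_inodes_wb and s_inode_wblist_lock on the superblock)
are illustrative assumptions based on the description above, not
quoted from the patch:

static void sb_mark_inode_writeback(struct inode *inode)
{
	struct super_block *sb = inode->i_sb;
	unsigned long flags;

	/*
	 * Called when the mapping is first tagged for writeback.
	 * Cheap unlocked check first; recheck under the lock so the
	 * inode is only added once.
	 */
	if (list_empty(&inode->i_wb_list)) {
		spin_lock_irqsave(&sb->s_inode_wblist_lock, flags);
		if (list_empty(&inode->i_wb_list))
			list_add_tail(&inode->i_wb_list, &sb->s_inodes_wb);
		spin_unlock_irqrestore(&sb->s_inode_wblist_lock, flags);
	}
}

static void sb_clear_inode_writeback(struct inode *inode)
{
	struct super_block *sb = inode->i_sb;
	unsigned long flags;

	/*
	 * Called when the writeback tag is cleared from the mapping,
	 * i.e. the last page under writeback has completed.
	 */
	if (!list_empty(&inode->i_wb_list)) {
		spin_lock_irqsave(&sb->s_inode_wblist_lock, flags);
		list_del_init(&inode->i_wb_list);
		spin_unlock_irqrestore(&sb->s_inode_wblist_lock, flags);
	}
}

/*
 * Simplified sketch of the sync-side walk: only inodes currently
 * under writeback are visited, instead of every cached inode. A real
 * version must also take a reference on the inode (__iget/iput)
 * before dropping the list lock, and can skip inodes whose mapping
 * is no longer tagged for writeback.
 */
static void wait_sb_inodes(struct super_block *sb)
{
	LIST_HEAD(sync_list);

	spin_lock_irq(&sb->s_inode_wblist_lock);
	list_splice_init(&sb->s_inodes_wb, &sync_list);
	while (!list_empty(&sync_list)) {
		struct inode *inode = list_first_entry(&sync_list,
						struct inode, i_wb_list);

		/* Put it back on the sb list for the next sync(2). */
		list_move_tail(&inode->i_wb_list, &sb->s_inodes_wb);
		spin_unlock_irq(&sb->s_inode_wblist_lock);

		filemap_fdatawait(inode->i_mapping);

		spin_lock_irq(&sb->s_inode_wblist_lock);
	}
	spin_unlock_irq(&sb->s_inode_wblist_lock);
}

The irq-safe locking reflects that the clear side runs from writeback
completion context; the unlocked list_empty() checks keep the common
case (an inode already on the list getting more pages tagged) off the
new lock.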
> 
> The patch looks good.  Just one comment: This grows struct inode by two
> longs. Such a growth should be justified by measuring the improvements. So
> can you measure some numbers showing how much the patch helped? I think it
> would be interesting to see:
> 
> a) How much sync(2) speed has improved if there's not much to wait for.

Depends on the size of the inode cache when sync is run.  If it's
empty, it's not noticeable. When you have tens of millions of cached,
clean inodes, the inode list traversal can take tens of seconds.
This is the sort of problem Josef reported that FB were having...

> b) See whether parallel heavy stat(2) load which is rotating lots of inodes
> in inode cache sees some improvement when it doesn't have to contend with
> sync(2) on s_inode_list_lock. I believe Dave Chinner had some loads where
> the contention on s_inode_list_lock due to sync and rotation of inodes was
> pretty heavy.

Just my usual fsmark workloads - they have parallel find and
parallel ls -lR traversals over the created fileset. Even just
running sync during creation (because there are millions of cached
inodes, and ~250,000 inodes being instantiated and reclaimed every
second) causes lock contention problems....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx