Re: [RFC][PATCH 1/2] Add a super operation for writeback

On Tue 03-06-14 07:14:44, Christoph Hellwig wrote:
> On Tue, Jun 03, 2014 at 04:05:31PM +0200, Jan Kara wrote:
> > So we currently flush inodes in first-dirtied, first-written-back order
> > when the superblock is not specified in the writeback work. That
> > completely ignores which superblock an inode belongs to, but I don't see
> > per-sb fairness actually making any sense when
> > 1) flushing old data (to keep the promise set by dirty_expire_centisecs)
> > 2) flushing data to reduce the number of dirty pages
> > And these are really the only two cases where we don't do per-sb flushing.
> > 
> > Now when filesystems want to do something more clever (and I can see
> > reasons for that e.g. when journalling metadata, even more so when
> > journalling data), I agree we need to somehow implement the above two
> > types of writeback using per-sb flushing. Type 1) is actually pretty
> > easy - just tell each sb to write back dirty data up to time T. Type 2)
> > is more difficult because it is a more open-ended task - it seems
> > similar to what shrinkers do, but that would require us to track the
> > per-sb amount of dirty pages / inodes and I'm not sure we want to add
> > even more page counting statistics... Especially since often bdi == fs.
> > Thoughts?
> 
> Honestly I think doing per-bdi writeback has been a major mistake.  As
> you said it only ever matters when we have filesystems on multiple
> partitions on a single device, and even then only in a simple setup, as
> soon as we use LVM or btrfs this sort of sharing stops happening anyway.
> I don't even see much of a benefit except that we prevent two flushing
> daemons from congesting a single device in that special case of multiple
> filesystems on partitions of the same device, and that could be solved
> in other ways.
  So I agree per-bdi / per-sb only matters in simple setups, but machines
with a single rotating disk with several partitions and without LVM aren't
that rare, AFAICT from my experience. And I agree we went for per-bdi
flushing to avoid two threads congesting a single device and creating
suboptimal IO patterns during background writeback.
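
The type 1) case quoted above - telling each sb to write back data dirtied
before some time T - could be modelled roughly like the toy userspace sketch
below. All struct and function names are made up for illustration; this is
not the interface from the posted patch, just the shape of the idea.

/*
 * Toy model: a flusher asks every superblock on the bdi to write back
 * inodes that were dirtied before an expiry time T (i.e. older than
 * dirty_expire_centisecs would allow).
 */
#include <stdio.h>

struct toy_inode {
	long dirtied_when;	/* when the inode was first dirtied */
	int ino;
};

struct toy_sb {
	const char *name;
	struct toy_inode inodes[3];
	int nr_inodes;
};

/* Per-sb "write back everything dirtied before older_than" operation. */
static void sb_writeback_expired(struct toy_sb *sb, long older_than)
{
	for (int i = 0; i < sb->nr_inodes; i++) {
		struct toy_inode *inode = &sb->inodes[i];

		if (inode->dirtied_when < older_than)
			printf("%s: writing back inode %d (dirtied at %ld)\n",
			       sb->name, inode->ino, inode->dirtied_when);
	}
}

int main(void)
{
	struct toy_sb sbs[2] = {
		{ "sda1", { { 100, 1 }, { 400, 2 }, { 900, 3 } }, 3 },
		{ "sda2", { { 150, 4 }, { 800, 5 } }, 2 },
	};
	long expire = 500;	/* "time T": flush anything dirtied before this */

	/* The flusher simply tells every sb on the bdi to flush up to T. */
	for (int i = 0; i < 2; i++)
		sb_writeback_expired(&sbs[i], expire);
	return 0;
}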

So currently I'm convinced we want to go for per-sb dirty tracking. That
also makes some speedups in that code noticeably simpler. I'm not convinced
about a per-sb flushing thread - if we don't regress the multiple-sb-per-bdi
case when we just let the threads from different superblocks contend for
IO, then that would be a natural thing to do. But once we have to introduce
some synchronization between the threads to avoid regressions, I think it
might be easier to just stay with a per-bdi thread which switches between
superblocks.
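
If we do keep a single per-bdi thread, the switching could look conceptually
like the toy sketch below: round-robin over the superblocks on the bdi and
write back a bounded chunk from each before moving on, so that no single sb
monopolizes the device. Again, all names and numbers are invented for
illustration only.

/*
 * Toy model: one per-bdi flusher that cycles over superblocks, writing
 * back at most CHUNK_PAGES from each per pass.
 */
#include <stdio.h>

#define NR_SBS		3
#define CHUNK_PAGES	64	/* pages written per sb before switching */

struct toy_sb {
	const char *name;
	int dirty_pages;
};

/* Write back at most max_pages from one sb; return how much was done. */
static int sb_writeback_chunk(struct toy_sb *sb, int max_pages)
{
	int done = sb->dirty_pages < max_pages ? sb->dirty_pages : max_pages;

	sb->dirty_pages -= done;
	if (done)
		printf("%s: wrote %d pages, %d dirty left\n",
		       sb->name, done, sb->dirty_pages);
	return done;
}

int main(void)
{
	struct toy_sb sbs[NR_SBS] = {
		{ "sda1", 200 }, { "sda2", 50 }, { "sda3", 130 },
	};
	int progress;

	/* Keep cycling over the superblocks until nothing is left to flush. */
	do {
		progress = 0;
		for (int i = 0; i < NR_SBS; i++)
			progress += sb_writeback_chunk(&sbs[i], CHUNK_PAGES);
	} while (progress);

	return 0;
}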

								Honza
-- 
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR



