Re: IO less throttling and cgroup aware writeback (Was: Re: [Lsf] Preliminary Agenda and Activities for LSF)

Excerpts from Vivek Goyal's message of 2011-03-31 10:16:37 -0400:
> On Thu, Mar 31, 2011 at 09:20:02AM +1100, Dave Chinner wrote:
> 
> [..]
> > > It should not happen that flusher
> > > thread gets blocked somewhere (trying to get request descriptors on
> > > request queue)
> > 
> > A major design principle of the bdi-flusher threads is that they
> > are supposed to block when the request queue gets full - that's how
> > we got rid of all the congestion garbage from the writeback
> > stack.
> 
> Instead of blocking flusher threads, can they voluntarily stop submitting
> more IO when they realize too much IO is in progress? We already keep
> stats of how much IO is under writeback on the bdi (BDI_WRITEBACK), and
> the flusher thread can use that.

We could, but the difficult part is keeping the hardware saturated as
requests complete.  Voluntarily stopping is pretty much the same thing
the congestion code was trying to do.

> 
> Jens mentioned this idea of how about getting rid of this request accounting
> at request queue level and move it somewhere up say at bdi level.
> 
> > 
> > There are plans to move the bdi-flusher threads to work queues, and
> > once that is done all your concerns about blocking and parallelism
> > are pretty much gone because it's trivial to have multiple writeback
> > works in progress at once on the same bdi with that infrastructure.
> 
> Will this not essentially nullify the advantage of IO-less throttling?
> I thought we did not want multiple threads doing writeback at the same
> time, to avoid seeks and achieve better throughput.

Work queues alone are probably not appropriate, at least for spinning
storage.  It will introduce seeks into what would have been
sequential writes.  I had to make the btrfs worker thread pools after
having a lot of trouble cramming writeback into work queues.

> 
> Now with this I am assuming that multiple works can be in progress doing
> writeback. Maybe we can limit writeback work to one per group, so in the
> global context only one work will be active.
> 
> > 
> > > or it tries to dispatch too much IO from an inode which
> > > primarily contains pages from a low-prio cgroup, so the high-prio
> > > cgroup task does not get enough pages dispatched to the device and
> > > hence gets no priority over the low-prio group.
> > 
> > That's a writeback scheduling issue independent of how we throttle,
> > and something we don't do at all right now. Our only decision on
> > what to write back is based on how long ago the inode was dirtied.
> > You need to completely rework the dirty inode tracking if you want
> > to efficiently prioritise writeback between different groups.
> > 
> > Given that filesystems don't all use the VFS dirty inode tracking
> > infrastructure and specific filesystems have different ideas of the
> > order of writeback, you've got a really difficult problem there.
> > e.g. ext3/4 and btrfs use ordered writeback for filesystem integrity
> > purposes which will completely screw any sort of prioritised
> > writeback. Remember the ext3 "fsync = global sync" latency problems?
> 
> Ok, so if one issues an fsync when the filesystem is mounted in
> "data=ordered" mode, we will flush all the writes to disk before
> committing metadata.
> 
> I have no knowledge of filesystem code, so here comes a stupid question.
> Do multiple fsyncs get completely serialized, or can they progress in
> parallel? IOW, if a fsync is in progress and we slow down the writeback
> of that inode's pages, can other fsyncs still make progress without
> getting stuck behind the previous fsync?

An fsync has two basic parts:

1) write the file data pages
2a) flush data=ordered in reiserfs/ext34
2b) do the real transaction commit


We can do part one in parallel across any number of writers.  For part
two, there is only one running transaction.  If the FS is smart, the
commit will only force down the transaction that last modified the
file. 50 procs running fsync may only need to trigger one commit.

btrfs and xfs do data=ordered differently.  They still avoid exposing
stale data but we don't pull the plug on the whole bathtub for every
commit.  In the btrfs case, we don't update metadata until the data is
written, so commits never have to force data writes.  xfs does something
lighter weight but with similar benefits.

ext4 with delayed allocation on and data=ordered will only end up
forcing down writes that are not under delayed allocation.  This is a
much smaller subset of the IO than ext3/reiserfs will do.

> 
> For me knowing this is also important in another context of absolute IO
> throttling.
> 
> - If a fsync is in progress and gets throttled at the device, what impact
>   does it have on other filesystem operations? What gets serialized
>   behind it?

It depends.  atime updates log inodes, and logging needs a transaction,
and transactions sometimes need to wait for the last transaction to
finish.  So it's very possible you'll make anything using the FS appear
to stop.

-chris
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

