Re: [PATCH 11/12] vmscan: Write out dirty pages in batch

On Mon, Jun 14, 2010 at 09:15:15PM -0700, Andrew Morton wrote:
> On Tue, 15 Jun 2010 13:20:34 +1000 Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> 
> > On Mon, Jun 14, 2010 at 06:39:57PM -0700, Andrew Morton wrote:
> > > On Tue, 15 Jun 2010 10:39:43 +1000 Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > > 
> > > > 
> > > > IOWs, IMO anywhere there is a context with significant queue of IO,
> > > > that's where we should be doing a better job of sorting before that
> > > > IO is dispatched to the lower layers. This is still no guarantee of
> > > > better IO (e.g. if the filesystem fragments the file) but it does
> > > > give the lower layers a far better chance at optimal allocation and
> > > > scheduling of IO...
> > > 
> > > None of what you said had much to do with what I said.
> > > 
> > > What you've described are implementation problems in the current block
> > > layer because it conflates "sorting" with "queueing".  I'm saying "fix
> > > that".
> > 
> > You can't sort until you've queued.
> 
> Yes you can.  That's exactly what you're recommending!

Umm, I suggested sorting a queue of dirty pages that was built by
reclaim before dispatching them. How does that translate to
me recommending "sort before queuing"?

> Only you're
> recommending doing it at the wrong level.

If you feed a filesystem garbage IO, you'll get garbage performance,
and there's nothing a block layer sort queue can do to fix the
damage that garbage IO does to both performance and filesystem
fragmentation levels. It's not just about IO issue - delayed
allocation pretty much requires writeback to be issuing well-formed
IOs to reap the benefits it can provide....

> > > And...  sorting at the block layer will always be superior to sorting
> > > at the pagecache layer because the block layer sorts at the physical
> > > block level and can handle not-well-laid-out files and can sort and merge
> > > pages from different address_spaces.
> > 
> > Yes, it can do that. And it still does that even if the higher
> > layers sort their IO dispatch better.
> > 
> > Filesystems try very hard to allocate adjacent logical offsets in a
> > file in adjacent physical blocks on disk - that's the whole point of
> > extent-indexed filesystems. Hence with modern filesystems there is
> > generally a direct correlation between the page {mapping,index}
> > tuple and the physical location of the mapped block.
> > 
> > i.e. there is generally zero physical correlation between pages
> > in different mappings, but a high physical correlation between
> > the indices of pages in the same mapping.
> 
> Nope.  Large-number-of-small-files is a pretty common case.  If the fs
> doesn't handle that well (ie: by placing them nearby on disk), it's
> borked.

Filesystems already handle this case just fine, because they see it
from writeback all the time. Untarring a kernel tree is a good
example of this...
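
You can see that {mapping,index} -> physical correlation directly.
Here's a rough userspace illustration (my sketch, not from any
patch) using the FIBMAP ioctl - which needs root - to print the
physical block backing each of the first few logical blocks of a
file. On an extent-based filesystem the physical blocks come out
contiguous:

/*
 * Illustrative sketch only: print the physical block behind each
 * logical block of a file via FIBMAP (requires CAP_SYS_RAWIO).
 */
#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int fd, blk;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
		return 1;

	for (blk = 0; blk < 8; blk++) {
		int b = blk;	/* in: logical block; out: physical block */

		if (ioctl(fd, FIBMAP, &b) < 0)
			return 1;
		printf("logical %d -> physical %d\n", blk, b);
	}
	close(fd);
	return 0;
}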

I suggested sorting all the IO to be issued into per-mapping page
groups (rough sketch below) because:
	a) it makes IO issued from reclaim look almost exactly the
	   same to the filesystem as if writeback were pushing out
	   the IO.
	b) it looks to be a trivial addition to the new code.

To me that's a no-brainer.
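
For concreteness, a minimal sketch of what I mean - my sketch, not
Mel's patch - assuming the reclaim pages are linked on a list via
page->lru as they are in shrink_page_list(), and using list_sort()
from lib/list_sort.c:

/*
 * Illustrative sketch: order a reclaim page list by {mapping, index}
 * so the IO dispatched below looks like ordinary per-file writeback.
 * Assumes pages are linked on @list via page->lru.
 */
#include <linux/list_sort.h>
#include <linux/mm.h>

static int page_mapping_cmp(void *priv, struct list_head *a,
			    struct list_head *b)
{
	struct page *pa = list_entry(a, struct page, lru);
	struct page *pb = list_entry(b, struct page, lru);

	/* Group pages by mapping first... */
	if (pa->mapping != pb->mapping)
		return pa->mapping < pb->mapping ? -1 : 1;
	/* ...then by ascending file offset within each mapping. */
	if (pa->index != pb->index)
		return pa->index < pb->index ? -1 : 1;
	return 0;
}

static void sort_reclaim_pages(struct list_head *list)
{
	list_sort(NULL, list, page_mapping_cmp);
}

Reclaim would call sort_reclaim_pages() on its dirty-page list just
before dispatching the writes, so the filesystem sees ascending
per-file offsets rather than LRU order.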

> It would be interesting to code up a little test patch though, see if
> there's benefit to be had going down this path.

I doubt Mel's test cases will show anything - they simply didn't
show enough IO being issued from reclaim to make any difference.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx