Re: [PATCHSET 1][PATCH 0/6] Filesystem AIO read/write

Suparna Bhattacharya wrote:
> On Thu, Jan 04, 2007 at 05:50:11PM +1100, Nick Piggin wrote:

>> OK, but I think that after IO submission, you do not run sync_page to
>> unplug the block device, like the normal IO path would (via lock_page,
>> before the explicit plug patches).


> In the buffered AIO case, we do run sync_page like normal IO ... it's just
> that we don't block in io_schedule(); everything else is pretty much similar.

You do? OK I must have misread it. Ignore that, then ;)

>> I'm sure more merging or batching could be done, but also consider that
>> most programs will not ever make use of any added complexity.


> I guess I didn't express myself well - by batching I meant being able to
> surround submission of a batch of iocbs with explicit plug/unplug instead
> of explicit plug/unplug for each iocb separately. Of course there is no
> easy way to do that, since at the io_submit() level we do not know about
> the block device (each iocb could be directed to a different fd and not
> just block devices). So it may not be worth thinking about.

Well we currently _could_ do that, because the block device plugging code
will detect if the request queue changes, and flush built up requests...

However, I think we may want to make the plug operations a callback rather
than hardcoded block device plugging, which will make this harder... but
you have a good point about increasing the scope of the plugging; it would
be a win if we can do it.
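
To make the batching idea concrete, here is a hypothetical sketch (not from
the patchset under discussion) of what submission-scoped plugging could look
like. It uses the per-task blk_start_plug()/blk_finish_plug() interface that
only went into mainline much later, and submit_one_iocb() is an invented
placeholder for the existing per-iocb submission path:

/*
 * Hypothetical sketch: plug across a whole io_submit() batch instead of
 * each iocb separately.  blk_start_plug()/blk_finish_plug() are the
 * later per-task plugging interface; submit_one_iocb() is an invented
 * placeholder for the per-iocb submission path.
 */
#include <linux/blkdev.h>

static long submit_iocb_batch(struct kioctx *ctx, struct iocb **iocbs, long nr)
{
	struct blk_plug plug;
	long i, done = 0;

	blk_start_plug(&plug);		/* requests accumulate on this task */
	for (i = 0; i < nr; i++) {
		if (submit_one_iocb(ctx, iocbs[i]) < 0)
			break;
		done++;
	}
	blk_finish_plug(&plug);		/* flush the whole batch to the queue(s) */

	return done;
}

Because the plug in that later interface is per-task rather than per-queue,
it would not matter that each iocb may target a different fd or a non-block
file; whatever block requests are generated simply accumulate and are flushed
once at the end.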

>> Regarding your patches, I've just had a quick look and have a question --
>> what do you do about blocking in page reclaim and dirty balancing? Aren't
>> those major points of blocking with buffered IO? Did your test cases
>> dirty enough data to start writeout or cause a lot of reclaim? (admittedly,
>> blocking in reclaim will now be much less common since the addition of
>> dirty mapping accounting).


> In my earlier versions of the patches I actually had converted these waits to
> be async retriable, but then I came to the conclusion that the additional
> complexity wasn't worth it. For one thing, it didn't seem to make a difference
> compared to the other, bigger cases, and I was looking primarily at handling
> the gross blocking points (say, to enable an application to keep device queues
> busy) rather than making everything asynchronous; for another, we had a long
> discussion thread way back about not making AIO submitters exempt from
> throttling or memory availability waits.
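
As a rough illustration of what "async retriable" means here (a sketch of the
idea only, not the dropped patch code): instead of sleeping in the throttling
path, an async iocb would return -EIOCBRETRY, an error code that existed in
kernels of this era, so the request could be retried later from the AIO
workqueue rather than blocking the submitter. mapping_over_dirty_limit() below
is an invented placeholder:

/*
 * Sketch of the retry idea, not the actual patch: synchronous callers
 * still throttle in balance_dirty_pages(), while an async iocb returns
 * -EIOCBRETRY so it can be retried later instead of blocking the
 * submitter.  mapping_over_dirty_limit() is an invented placeholder.
 */
static ssize_t aio_dirty_throttle(struct kiocb *iocb,
				  struct address_space *mapping)
{
	if (is_sync_kiocb(iocb)) {
		balance_dirty_pages_ratelimited(mapping);
		return 0;
	}

	if (mapping_over_dirty_limit(mapping))	/* invented helper */
		return -EIOCBRETRY;		/* retry later from the AIO workqueue */

	return 0;
}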

OK, I was just curious. For keeping queues busy, your patchset should work
well (sleeping for more memory should be pretty uncommon). But for
overlapping computation with IO, it may not work so well if it encounters
throttling.
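
To illustrate the usage model being discussed (keeping device queues busy and
overlapping computation with IO), here is a minimal userspace sketch using the
standard libaio interface; it is not code from the patchset, just the usual
io_setup()/io_submit()/io_getevents() calls with a whole batch of buffered
reads submitted in one go:

/* Minimal sketch: submit a batch of buffered reads in one io_submit()
 * call, overlap computation, then reap completions.  Build with -laio. */
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NR_IOS  8
#define IO_SIZE (64 * 1024)

int main(int argc, char **argv)
{
	io_context_t ctx;
	struct iocb iocbs[NR_IOS], *iocbp[NR_IOS];
	struct io_event events[NR_IOS];
	int fd, i, submitted, got;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
		perror("open");
		return 1;
	}

	memset(&ctx, 0, sizeof(ctx));
	if (io_setup(NR_IOS, &ctx) < 0) {
		perror("io_setup");
		return 1;
	}

	for (i = 0; i < NR_IOS; i++) {
		/* buffered reads, so no O_DIRECT alignment requirements */
		io_prep_pread(&iocbs[i], fd, malloc(IO_SIZE), IO_SIZE,
			      (long long)i * IO_SIZE);
		iocbp[i] = &iocbs[i];
	}

	/* One submission call covers the whole batch; with retry-based
	 * buffered AIO the caller should not block in io_schedule() here. */
	submitted = io_submit(ctx, NR_IOS, iocbp);
	if (submitted < 1) {
		fprintf(stderr, "io_submit failed: %d\n", submitted);
		return 1;
	}

	/* ... overlap computation here while the reads are in flight ... */

	got = io_getevents(ctx, submitted, submitted, events, NULL);
	printf("completed %d of %d requests\n", got, submitted);

	io_destroy(ctx);
	return 0;
}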

--
SUSE Labs, Novell Inc.
