Re: [RFC] IO scheduler based io controller (V5)

On Mon, Jun 22, 2009 at 12:06:42PM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal@xxxxxxxxxx> writes:
> 
> > On Mon, Jun 22, 2009 at 11:40:42AM -0400, Jeff Moyer wrote:
> >> Vivek Goyal <vgoyal@xxxxxxxxxx> writes:
> >> 
> >> > On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
> >> >> * Vivek Goyal <vgoyal@xxxxxxxxxx> [2009-06-19 16:37:18]:
> >> >> 
> >> >> > 
> >> >> > Hi All,
> >> >> > 
> >> >> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
> >> >> [snip]
> >> >> 
> >> >> > Testing
> >> >> > =======
> >> >> >
> >> >> 
> >> >> [snip]
> >> >> 
> >> >> I've not been reading through the discussions in complete detail, but
> >> >> I see no reference to async reads or AIO. AIO, in particular, presumes
> >> >> the context of the user-space process. Could you elaborate on any
> >> >> testing you've done with these cases?
> >> >> 
> >> >
> >> > Hi Balbir,
> >> >
> >> > So far I had not done any testing with AIO. I have done some just now.
> >> > Here are the results.
> >> >
> >> > Test1 (AIO reads)
> >> > ================
> >> > Set up two fio AIO read jobs in two cgroups with weights 1000 and 500
> >> > respectively. I am using the cfq scheduler. Following are some lines from
> >> > my test script.
> >> >
> >> > ===================================================================
> >> > fio_args="--ioengine=libaio --rw=read --size=512M"
> >> 
> >> AIO doesn't make sense without O_DIRECT.
> >> 
> >
> > Ok, here are the results with --direct=1 for the reads. In the previous
> > posting, writes were already direct.
> >
> > test1 statistics: time=8 16 20796   sectors=8 16 1049648
> > test2 statistics: time=8 16 10551   sectors=8 16 581160
> >
> >
> > Not sure why reads are so slow with --direct=1. In the previous test
> > (no direct IO) I had cleared the caches using
> > (echo 3 > /proc/sys/vm/drop_caches), so the reads could not have come from
> > the page cache, could they?
> 
> O_DIRECT bypasses the page cache, and hence the readahead code.  Try
> driving deeper queue depths and/or using larger I/O sizes.

Ok. Thanks. I tried increasing iodepth to 20 and it helped a lot.

test1 statistics: time=8 16 6672   sectors=8 16 1049656
test2 statistics: time=8 16 3508   sectors=8 16 583432
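
For reference, here is a rough sketch of how such a direct AIO read run can
be driven (illustrative only -- the /cgroup mount point, the "io" subsystem
and io.weight file names, and the /mnt paths are assumptions, not lines from
my actual script):

===================================================================
fio_args="--ioengine=libaio --rw=read --size=512M --direct=1 --iodepth=20"

mount -t cgroup -o io none /cgroup
mkdir -p /cgroup/test1 /cgroup/test2
echo 1000 > /cgroup/test1/io.weight
echo  500 > /cgroup/test2/io.weight

# Drop the page cache so nothing is served from buffered state.
echo 3 > /proc/sys/vm/drop_caches

# One fio job per cgroup.  Each helper shell moves itself into the cgroup
# first and then execs fio, so fio and its children are accounted to the
# right group.
sh -c "echo \$\$ > /cgroup/test1/tasks; exec fio $fio_args --name=test1 --directory=/mnt/fio1" &
sh -c "echo \$\$ > /cgroup/test2/tasks; exec fio $fio_args --name=test2 --directory=/mnt/fio2" &
wait
===================================================================

Since O_DIRECT bypasses the page cache and hence readahead, it is the deeper
iodepth that keeps the device busy here.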

Thanks
Vivek

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
