Re: [RESEND] [PATCH] readahead: add blk_run_backing_dev

On Mon, Jun 29, 2009 at 06:55:21PM +0800, Vladislav Bolkhovitin wrote:
> 
> 
> Wu Fengguang, on 06/29/2009 01:34 PM wrote:
> > On Sat, Jun 20, 2009 at 08:29:31PM +0800, Vladislav Bolkhovitin wrote:
> >> Wu Fengguang, on 06/20/2009 07:55 AM wrote:
> >>> On Fri, Jun 19, 2009 at 03:04:36AM +0800, Andrew Morton wrote:
> >>>> On Sun, 7 Jun 2009 06:45:38 +0800
> >>>> Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
> >>>>
> >>>>>>> Do you have a place where the raw blktrace data can be retrieved for
> >>>>>>> more in-depth analysis?
> >>>>>> I think your comment is right on the mark. In another thread, Wu Fengguang
> >>>>>> pointed out the same issue.
> >>>>>> Wu and I are also waiting for that analysis.
> >>>>> And do it with a large readahead size :)
> >>>>>
> >>>>> Alan, this was my analysis:
> >>>>>
> >>>>> : Hifumi, can you help retest with some large readahead size?
> >>>>> :
> >>>>> : Your readahead size (128K) is smaller than your max_sectors_kb (256K),
> >>>>> : so two readahead IO requests get merged into one real IO, which means
> >>>>> : half of the readahead requests are delayed.
> >>>>>
> >>>>> I.e., two readahead requests get merged and complete together, so the
> >>>>> effective IO size is doubled, but at the same time the IO becomes completely
> >>>>> synchronous: each 128K readahead request is 256 sectors, two adjacent ones
> >>>>> fit within the 512-sector max_sectors_kb limit and get merged by the
> >>>>> elevator, and the reader then has to wait for both to finish.
> >>>>>
> >>>>> :
> >>>>> : The IO completion size goes down from 512 to 256 sectors (each line below
> >>>>> : is a blktrace completion event: "C R sector + count" is a completed read
> >>>>> : starting at the given sector, with the count in 512-byte sectors):
> >>>>> :
> >>>>> : before patch:
> >>>>> :   8,0    3   177955    50.050313976     0  C   R 8724991 + 512 [0]
> >>>>> :   8,0    3   177966    50.053380250     0  C   R 8725503 + 512 [0]
> >>>>> :   8,0    3   177977    50.056970395     0  C   R 8726015 + 512 [0]
> >>>>> :   8,0    3   177988    50.060326743     0  C   R 8726527 + 512 [0]
> >>>>> :   8,0    3   177999    50.063922341     0  C   R 8727039 + 512 [0]
> >>>>> :
> >>>>> : after patch:
> >>>>> :   8,0    3   257297    50.000760847     0  C   R 9480703 + 256 [0]
> >>>>> :   8,0    3   257306    50.003034240     0  C   R 9480959 + 256 [0]
> >>>>> :   8,0    3   257307    50.003076338     0  C   R 9481215 + 256 [0]
> >>>>> :   8,0    3   257323    50.004774693     0  C   R 9481471 + 256 [0]
> >>>>> :   8,0    3   257332    50.006865854     0  C   R 9481727 + 256 [0]
> >>>>>
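For readers without the posted patch at hand, the change under discussion is
essentially a one-line kick of the block device queue after readahead has been
submitted. A minimal sketch of the idea, assuming the hook sits at the end of
page_cache_async_readahead() in mm/readahead.c as the patch title suggests
(illustrative, not the exact posted diff):

#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/backing-dev.h>

void page_cache_async_readahead(struct address_space *mapping,
				struct file_ra_state *ra, struct file *filp,
				struct page *page, pgoff_t offset,
				unsigned long req_size)
{
	/* ... existing congestion checks and readahead submission ... */
	ondemand_readahead(mapping, ra, filp, true, offset, req_size);

#ifdef CONFIG_BLOCK
	/*
	 * Kick the queue now.  Otherwise the freshly queued readahead
	 * request can sit in the elevator until it merges with the next
	 * one (the 512-sector completions in the "before" trace above),
	 * which makes the readahead effectively synchronous.
	 */
	blk_run_backing_dev(mapping->backing_dev_info, NULL);
#endif
}

With the queue kicked, each 256-sector readahead request is dispatched on its
own, which matches the "after patch" trace.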
> >>>> I haven't sent readahead-add-blk_run_backing_dev.patch in to Linus yet
> >>>> and it's looking like 2.6.32 material, if ever.
> >>>>
> >>>> If it turns out to be wonderful, we could always ask the -stable
> >>>> maintainers to put it in 2.6.x.y I guess.
> >>> Agreed. The expected (and interesting) test on a properly configured
> >>> HW RAID has not happened yet, hence the theory remains unsupported.
> >> Hmm, do you see anything improper in Ronald's setup (see
> >> http://sourceforge.net/mailarchive/forum.php?thread_name=a0272b440906030714g67eabc5k8f847fb1e538cc62%40mail.gmail.com&forum_name=scst-devel)?
> >> It is HW RAID based.
> > 
> > No. Ronald's HW RAID performance is reasonably good.  I meant that Hifumi's
> > RAID performance is rather poor and might be improved by increasing the
> > readahead size, hehe.
> > 
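Increasing the readahead size for such a retest is a per-device setting. A
small userspace sketch, assuming /dev/sda and a 1024-sector (512 KB) window
(both values are illustrative); BLKRASET/BLKRAGET are the ioctls behind
blockdev --setra/--getra, and setting the value requires root:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
	unsigned long ra = 1024;		/* readahead window, 512-byte sectors */
	int fd = open("/dev/sda", O_RDONLY);	/* illustrative device */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, BLKRASET, ra) < 0) {	/* set window (needs CAP_SYS_ADMIN) */
		perror("BLKRASET");
		close(fd);
		return 1;
	}
	if (ioctl(fd, BLKRAGET, &ra) < 0) {	/* read the value back */
		perror("BLKRAGET");
		close(fd);
		return 1;
	}
	printf("readahead: %lu sectors (%lu KB)\n", ra, ra / 2);
	close(fd);
	return 0;
}

The same setting is exposed as /sys/block/<dev>/queue/read_ahead_kb; the point
of the retest is to push the readahead size well past max_sectors_kb, so that
request merging no longer halves the effective number of in-flight requests.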
> >> As I already wrote, we can ask Ronald to perform any needed tests.
> > 
> > Thanks!  Ronald's test results are:
> > 
> > 231   MB/s   HW RAID                        
> >  69.6 MB/s   HW RAID + SCST                 
> >  89.7 MB/s   HW RAID + SCST + this patch
> > 
> > So this patch seems to help SCST, but again it would be better to
> > improve the SCST throughput first - it is currently quite sub-optimal.
> 
> No, SCST performance isn't an issue here. You simply can't get more than
> about 110 MB/s from iSCSI over 1GbE (1 Gbit/s is 125 MB/s raw, minus
> TCP/IP and iSCSI overhead), hence 231 MB/s fundamentally isn't possible.
> Starting from 89.7 MB/s, there is only room for roughly 20% improvement,
> which should be 

Ah yes.

> achieved with better client-side-driven pipelining (see our other 
> discussions, e.g. http://lkml.org/lkml/2009/5/12/370)

Yeah, why that happens is what I want to figure out :)

Thanks,
Fengguang

> > (Sorry for the long delay: I do not yet have a good idea of how to
> >  measure such timing issues.)
> > 
> > And if Ronald could provide the HW RAID performance with this patch,
> > then we can confirm if this patch really makes a difference for RAID.
> > 
> > Thanks,
> > Fengguang
