Re: [PATCH 00/11] [RFC] 512K readahead size with thrashing safe readahead

On Wed, Feb 03, 2010 at 02:27:56PM +0800, Wu Fengguang wrote:
> Vivek,
> 
> On Wed, Feb 03, 2010 at 06:38:03AM +0800, Vivek Goyal wrote:
> > On Tue, Feb 02, 2010 at 11:28:35PM +0800, Wu Fengguang wrote:
> > > Andrew,
> > > 
> > > This is to lift the default readahead size to 512KB, which I believe yields
> > > more I/O throughput without noticeably increasing I/O latency for today's HDDs.
> > > 
> > 
> > Hi Fengguang,
> > 
> > I was doing a quick test with the patches, using fio to run some
> > sequential reader threads against one LUN from an HP EVA. In my case
> > it looks like throughput has come down with the patches.
> 
> Thank you for the quick testing!
> 
> This patchset does 3 things:
> 
> 1) 512K readahead size
> 2) new readahead algorithms
> 3) new readahead tracing/stats interfaces
> 
> (1) will impact performance, while (2) _might_ impact performance in
> case of bugs.
> 
> Would you kindly retest the patchset with readahead size manually set
> to 128KB?  That would help identify the root cause of the performance
> drop:
> 
>         DEV=sda
>         echo 128 > /sys/block/$DEV/queue/read_ahead_kb
> 

I have two paths to the HP EVA and a multipath device set up on top of
them (dm-3). I noticed that with the vanilla kernel read_ahead_kb is 128
after boot, but with your patches applied it is set to 4. So it looks like
something went wrong with the device size/capacity detection, hence the
wrong default. Manually setting read_ahead_kb=512 got me better
performance compared to the vanilla kernel.
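
For reference, this is roughly the manual check/override I did on the
multipath device (a sketch; dm-3 is just the name on my setup, your
device name may differ):

        # current readahead size (in KB) on the multipath device
        cat /sys/block/dm-3/queue/read_ahead_kb

        # set it back up to 512KB by hand
        echo 512 > /sys/block/dm-3/queue/read_ahead_kb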

AVERAGE[bsr]    
------- 
job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)    
---       --- --  ------------   -----------    -------------  -----------    
bsr       3   1   190302         97937.3        0              0              
bsr       3   2   185636         223286         0              0              
bsr       3   4   185986         363658         0              0              
bsr       3   8   184352         428478         0              0              
bsr       3   16  185646         594311         0              0              
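
For reference, the bsr job is plain buffered sequential reads; a rough
command-line equivalent (the file names and sizes here are made up, not
the exact job file I used) would look something like:

        # NR buffered sequential readers against the test filesystem
        fio --name=bsr --rw=read --direct=0 --bs=128k --numjobs=4 \
            --size=2G --directory=/mnt/test --group_reporting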

Thanks
Vivek
 
> The readahead stats provided by the patchset are very useful for
> analyzing the problem:
> 
>         mount -t debugfs none /debug
>         
>         # for each benchmark:
>                 echo > /debug/readahead/stats  # reset counters
>                 # do benchmark
>                 cat /debug/readahead/stats     # check counters
> 
> Thanks,
> Fengguang
