Fwd: [PATCH 00/11] [RFC] 512K readahead size with thrashing safe readahead

FYI, wanted to get this on our radar... it seems the latest DM isn't
allowing the RFC readahead code to set a sane readahead default for DM
devices: get_capacity() is returning 0 for DM devices (not just
multipath). Vivek did share that fdisk -l shows the proper capacity for
the DM device.
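
The mismatch should be easy to confirm from userspace. A quick sketch,
with dm-3 standing in for the multipath device from Vivek's report:

        # capacity as the kernel exports it, in 512-byte sectors
        cat /sys/block/dm-3/size
        # the same value in bytes, via the BLKGETSIZE64 ioctl
        blockdev --getsize64 /dev/dm-3
        # the output Vivek confirmed looks correct
        fdisk -l /dev/dm-3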

I haven't had a chance to look at the relevant code yet.

I've asked Vivek to cc dm-devel on any further messages he might send
in response to this thread.
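
In the meantime the symptom itself is quick to spot: compare the
readahead default on the underlying paths with the multipath device
(sda and sdb are assumed path names here, dm-3 is from Vivek's report):

        for dev in sda sdb dm-3; do
                echo -n "$dev: "
                cat /sys/block/$dev/queue/read_ahead_kb
        done
        # with the RFC patches applied, Vivek sees 4 for dm-3 instead of 512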

---------- Forwarded message ----------
From: Vivek Goyal <vgoyal@xxxxxxxxxx>
Date: Wed, Feb 3, 2010 at 10:58 AM
Subject: Re: [PATCH 00/11] [RFC] 512K readahead size with thrashing
safe readahead
To: Wu Fengguang <fengguang.wu@xxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Jens Axboe
<jens.axboe@xxxxxxxxxx>, Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>,
Linux Memory Management List <linux-mm@xxxxxxxxx>,
"linux-fsdevel@xxxxxxxxxxxxxxx" <linux-fsdevel@xxxxxxxxxxxxxxx>, LKML
<linux-kernel@xxxxxxxxxxxxxxx>


On Wed, Feb 03, 2010 at 10:24:54AM -0500, Vivek Goyal wrote:
> On Wed, Feb 03, 2010 at 02:27:56PM +0800, Wu Fengguang wrote:
> > Vivek,
> >
> > On Wed, Feb 03, 2010 at 06:38:03AM +0800, Vivek Goyal wrote:
> > > On Tue, Feb 02, 2010 at 11:28:35PM +0800, Wu Fengguang wrote:
> > > > Andrew,
> > > >
> > > > This is to lift the default readahead size to 512KB, which I believe yields
> > > > more I/O throughput without noticeably increasing I/O latency on today's HDDs.
> > > >
> > >
> > > Hi Fengguang,
> > >
> > > I was doing a quick test with the patches, using fio to run some
> > > sequential reader threads against a single LUN on an HP EVA. In my
> > > case it looks like throughput has come down with the patches.
> >
> > Thank you for the quick testing!
> >
> > This patchset does 3 things:
> >
> > 1) 512K readahead size
> > 2) new readahead algorithms
> > 3) new readahead tracing/stats interfaces
> >
> > (1) will impact performance, while (2) _might_ impact performance in
> > case of bugs.
> >
> > Would you kindly retest the patchset with readahead size manually set
> > to 128KB?  That would help identify the root cause of the performance
> > drop:
> >
> >         DEV=sda
> >         echo 128 > /sys/block/$DEV/queue/read_ahead_kb
> >
>
> I have two paths to the HP EVA and a multipath device set up on top of
> them (dm-3). I noticed that with the vanilla kernel read_ahead_kb=128
> after boot, but with your patches applied it is set to 4. So it looks
> like something went wrong with device size/capacity detection, hence
> the wrong defaults. Manually setting read_ahead_kb=512 got me better
> performance compared to the vanilla kernel.
>

I put a printk in add_disk() and noticed that for the multipath device
get_capacity() is returning 0, which is why ra_pages is being set to 1.
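
That also lines up with the read_ahead_kb=4 above: ra_pages counts
PAGE_SIZE units, so ra_pages == 1 on a 4K-page system is 4KB of
readahead. From userspace:

        cat /sys/block/dm-3/queue/read_ahead_kb    # 4 with the patches applied
        getconf PAGE_SIZE                          # 4096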

Thanks
Vivek


--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel

