On Wed 25-01-12 14:33:54, Steven Whitehouse wrote:
> On Tue, 2012-01-24 at 23:15 -0700, Andreas Dilger wrote:
> > On 2012-01-24, at 8:29 PM, Wu Fengguang wrote:
> > > On Tue, Jan 24, 2012 at 09:39:36PM +0100, Jan Kara wrote:
> > >> On Tue 24-01-12 15:13:40, Jeff Moyer wrote:
> > >>>> Maybe 128 KB is too small a default these days, but OTOH no one
> > >>>> prevents you from raising it (e.g. SLES uses 1 MB as the default).
> > >>>
> > >>> For some reason, I thought it had been bumped to 512 KB by default.
> > >>> Must be that overactive imagination I have... Anyway, if all of the
> > >>> distros start bumping the default, don't you think it's time to
> > >>> consider bumping it upstream, too? I thought a lot of work had been
> > >>> put into not being too aggressive on readahead, so the downside of
> > >>> having a larger read_ahead_kb setting was fairly small.
> > >>
> > >> Yeah, I believe 512 KB should be pretty safe these days, except for
> > >> the embedded world. OTOH the average desktop user doesn't really
> > >> care, so it's mostly servers with beefy storage that care... (note
> > >> that, as I wrote, we raised read_ahead_kb for SLES but not for
> > >> openSUSE or SLED (the desktop enterprise distro)).
> > >
> > > Maybe we don't need to care much about the embedded world when
> > > raising the default readahead size? Even the current 128 KB is too
> > > much for them, and I see Android setting the readahead size to 4 KB...
> > >
> > > Some time ago I posted a series raising the default readahead size
> > > to 512 KB. But I'm open to using 1 MB now (shall we vote on it?).
> >
> > I'm all in favour of 1 MB (aligned) readahead. I think the embedded
> > folks already set enough CONFIG options that we could trigger on one
> > of those (e.g. CONFIG_EMBEDDED) to avoid stepping on their toes. It
> > would also be possible to trigger on the size of the device, so that
> > a 32 MB USB stick doesn't sit busy for a minute doing readahead that
> > is useless.
> >
> > Cheers, Andreas
>
> If the reason for not setting a larger readahead value is just that it
> might increase memory pressure and thus decrease performance, is it
> possible to use a suitable metric from the VM to set the value
> automatically according to circumstances?
  In theory, yes. In practice - do you have such a heuristic? ;) There
are a lot of factors, and it's hard to quantify how increased cache
pressure influences the performance of a particular workload. We could
introduce some adaptive logic, but so far a fixed upper bound has
worked OK.

								Honza
-- 
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR
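
[Editor's note: to make the knob under discussion concrete - in kernels
of this era the 128 KB figure came from the VM_MAX_READAHEAD constant
(in kilobytes) in include/linux/mm.h, which blk_queue_make_request()
used to initialize the queue's backing_dev_info.ra_pages. The per-device
override the thread refers to is the sysfs file
/sys/block/<dev>/queue/read_ahead_kb. A minimal sketch of raising it
from userspace, assuming a device named sda and root privileges; the
value 512 mirrors the "pretty safe" figure from the thread:]

#include <stdio.h>

int main(void)
{
	/* Per-device readahead knob; path assumes the device is sda. */
	const char *knob = "/sys/block/sda/queue/read_ahead_kb";
	FILE *f = fopen(knob, "w");

	if (!f) {
		perror("fopen read_ahead_kb");
		return 1;
	}
	/* 512 KB, up from the historical 128 KB; SLES ships 1024. */
	fprintf(f, "512\n");
	fclose(f);
	return 0;
}

[The setting is per device and does not persist across reboots, so a
boot script or udev rule is the usual place distros apply it.]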
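
[Editor's note: Andreas's two triggers - a kernel config option and the
device size - could be combined into a size-aware default. The sketch
below is purely illustrative: ra_default_kb(), its constants, and the
1/1024th-of-device cap are invented for this note, not an existing
kernel interface.]

#ifdef CONFIG_EMBEDDED
#define RA_DEFAULT_KB	128	/* keep today's conservative default */
#else
#define RA_DEFAULT_KB	1024	/* the proposed 1 MB default */
#endif

static unsigned long ra_default_kb(unsigned long long disk_bytes)
{
	/*
	 * Hypothetical cap at roughly 1/1024th of the device size:
	 * a 32 MB stick gets 32 KB, anything >= 1 GB gets the full
	 * default, so small removable media never see 1 MB windows.
	 */
	unsigned long long cap_kb = disk_bytes >> 20;

	return cap_kb < RA_DEFAULT_KB ? (unsigned long)cap_kb
				      : RA_DEFAULT_KB;
}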
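
[Editor's note: as for the adaptive heuristic Jan asks for and nobody
has - a userspace daemon could approximate one with a coarse rule keyed
to free memory. The sketch below is an assumption-laden illustration:
the tier thresholds and the use of sysinfo() as a stand-in for "cache
pressure" are invented, and this is not something the kernel or any
existing tool does.]

#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
	struct sysinfo si;
	unsigned long long free_kb;
	unsigned long ra_kb;

	if (sysinfo(&si) != 0) {
		perror("sysinfo");
		return 1;
	}
	/* freeram is in units of mem_unit bytes; convert to KB. */
	free_kb = (unsigned long long)si.freeram * si.mem_unit / 1024;

	/* Arbitrary tiers: the full 1 MB only with plenty of headroom,
	 * falling back toward the historical 128 KB under pressure. */
	if (free_kb > 1024 * 1024)		/* > 1 GB free */
		ra_kb = 1024;
	else if (free_kb > 256 * 1024)		/* > 256 MB free */
		ra_kb = 512;
	else
		ra_kb = 128;

	printf("suggested read_ahead_kb: %lu\n", ra_kb);
	return 0;
}

[This is exactly the kind of rule Jan is skeptical of: free memory says
little about how much a particular workload values the cache it would
lose, which is why the thread settles for a fixed upper bound.]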