Re: [Lsf-pc] [LSF/MM TOPIC] a few storage topics

Hi,

On Wed, 2012-01-25 at 11:22 -0500, Loke, Chetan wrote:
> > If the reason for not setting a larger readahead value is just that it
> > might increase memory pressure and thus decrease performance, is it
> > possible to use a suitable metric from the VM in order to set the value
> > automatically according to circumstances?
> > 
> 
> How about tracking heuristics for 'read-hits from previous read-aheads'? If the hits are in an acceptable range (user-configurable knob?) then keep seeking, else back off a little on the read-ahead?
> 
> > Steve.
> 
> Chetan Loke

I'd been wondering about something similar to that. The basic scheme
would be:

 - Set a page flag when readahead is performed
 - Clear the flag when the page is read (or on page fault for mmap)
(i.e. when it is first used after readahead)

Then when the VM scans for pages to eject from cache, check the flag and
keep an exponential average (probably on a per-cpu basis) of the rate at
which such flagged pages are ejected. That number can then be used to
reduce the max readahead value.

The questions are whether this would reduce the readahead size quickly enough
to avoid problems, and whether the extra complication is worth it compared
with using an overall metric for memory pressure.

There may well be better solutions though,

Steve.


--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel

