RE: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics

On Wed, 2012-01-25 at 16:40 +0000, Steven Whitehouse wrote:
> Hi,
> 
> On Wed, 2012-01-25 at 11:22 -0500, Loke, Chetan wrote:
> > > If the reason for not setting a larger readahead value is just that it
> > > might increase memory pressure and thus decrease performance, is it
> > > possible to use a suitable metric from the VM in order to set the value
> > > automatically according to circumstances?
> > > 
> > 
> > How about tracking heuristics for 'read-hits from previous read-aheads'? If the hits are in an acceptable range (user-configurable knob?), then keep reading ahead; otherwise back off a little on the read-ahead?
> > 
> > > Steve.
> > 
> > Chetan Loke
> 
> I'd been wondering about something similar to that. The basic scheme
> would be:
> 
>  - Set a page flag when readahead is performed
>  - Clear the flag when the page is read (or on page fault for mmap)
> (i.e. when it is first used after readahead)
> 
> Then when the VM scans for pages to eject from cache, check the flag and
> keep an exponential average (probably on a per-cpu basis) of the rate at
> which such flagged pages are ejected. That number can then be used to
> reduce the max readahead value.
> 
> The questions are whether this would provide a fast enough reduction
> in readahead size to avoid problems, and whether the extra
> complication is worth it compared with using an overall metric for
> memory pressure.
> 
> There may well be better solutions though,
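
To make the scheme above concrete, here is a tiny user-space mock-up of
the flag-plus-average idea (a sketch only: every name in it is made up
for illustration, none of it is real kernel API).  A flag marks pages
brought in by readahead, first access clears it, and an
exponentially-weighted average of "evicted while still flagged" scales
the max readahead window down:

#include <stdio.h>

#define PG_READAHEAD 0x1                /* hypothetical flag: page came in via RA */

struct page { unsigned int flags; };

static int ra_futile_avg;               /* EWMA of futile evictions, fixed point /1024 */
static int ra_max_pages = 128;          /* current max readahead window */

static void mark_readahead(struct page *p) { p->flags |= PG_READAHEAD; }
static void mark_accessed(struct page *p)  { p->flags &= ~PG_READAHEAD; }

/* Called as the VM ejects a page: fold "was it still flagged?" into the average. */
static void note_eviction(struct page *p)
{
        int futile = (p->flags & PG_READAHEAD) ? 1024 : 0;

        ra_futile_avg += (futile - ra_futile_avg) / 8;   /* weight 1/8 */
        ra_max_pages = 128 - 120 * ra_futile_avg / 1024; /* 128 down to 8 pages */
}

int main(void)
{
        struct page p = { 0 };

        for (int i = 0; i < 8; i++) {
                mark_readahead(&p);
                if (i & 1)
                        mark_accessed(&p);      /* pretend half the RA pages get used */
                note_eviction(&p);
                p.flags = 0;
                printf("futile_avg=%d/1024 ra_max=%d pages\n",
                       ra_futile_avg, ra_max_pages);
        }
        return 0;
}

The 1/8 weight and the 128-page window are arbitrary; the point is only
that the average reacts within a few evictions, which bears on the
first of Steven's two questions.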

So there are two separate problems mentioned here.  The first is to
ensure that readahead (RA) pages are treated as more disposable than
accessed pages under memory pressure; the second is to derive a
statistic for futile RA (pages that were read in but never accessed).

The first sounds really like it's an LRU thing rather than adding yet
another page flag.  We need a position in the LRU list for
never-accessed pages, so that they're the first to be evicted as memory
pressure rises.

The second follows from the first: you can derive the futile readahead
statistic from the LRU position of unaccessed pages, and it could be
kept globally.
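
As a sketch of that (again user-space and hypothetical, with a
fixed-size array standing in for the LRU list): a page that drifts to a
given depth without ever being accessed is counted, once, as futile
readahead:

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define LRU_LEN     8
#define FUTILE_POS  5   /* depth at which an unaccessed RA page counts as futile */

struct page {
        bool ra_unaccessed;     /* brought in by RA and never touched */
};

static struct page lru[LRU_LEN];        /* index 0 = newest, LRU_LEN-1 = oldest */
static unsigned long futile_ra;         /* the global statistic */

/* Slide every page one position deeper and admit a fresh RA page. */
static void lru_insert_ra(void)
{
        memmove(&lru[1], &lru[0], (LRU_LEN - 1) * sizeof(lru[0]));
        lru[0] = (struct page){ .ra_unaccessed = true };

        /* Count the page crossing the futility depth exactly once. */
        if (lru[FUTILE_POS].ra_unaccessed)
                futile_ra++;
}

int main(void)
{
        for (int i = 0; i < 10; i++)
                lru_insert_ra();        /* nothing ever gets accessed */
        printf("futile RA pages: %lu\n", futile_ra);    /* 10 - FUTILE_POS = 5 */
        return 0;
}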

Now the problem: if you trash all unaccessed RA pages first, you end up
with a situation like playing a movie under moderate memory pressure:
we read ahead, then trash the RA pages, then have to re-read them to
display to the user, resulting in an undesirable uptick in read I/O.

Based on the above, it sounds like a better heuristic would be to evict
accessed clean pages at the top of the LRU list before unaccessed clean
pages, because the expectation is that the unaccessed clean pages will
be accessed (that is, after all, why we did the readahead).  As RA
pages age in the LRU list, they become candidates for being futile,
since they've been in memory for a while and no one has accessed them,
leading to the conclusion that they aren't ever going to be read.
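
A sketch of that ordering (hypothetical names and threshold, not kernel
code): accessed clean pages are cheaper to evict than young unaccessed
RA pages, and an RA page that ages past a threshold unread becomes the
cheapest of all:

#include <stdio.h>
#include <stdbool.h>

struct page {
        bool accessed;          /* touched since it entered the cache */
        unsigned long age;      /* ticks spent on the LRU list */
};

#define RA_FUTILE_AGE 100       /* made-up aging threshold */

static bool page_is_futile(const struct page *p)
{
        return !p->accessed && p->age > RA_FUTILE_AGE;
}

/* Lower cost == evicted earlier. */
static int eviction_cost(const struct page *p)
{
        if (page_is_futile(p))
                return 0;       /* aged out unread: drop first */
        if (p->accessed)
                return 1;       /* already used once; RA did its job */
        return 2;               /* young unaccessed RA page: expect a read soon */
}

int main(void)
{
        struct page young_ra = { false, 10 };
        struct page used     = { true, 10 };
        struct page old_ra   = { false, 500 };

        printf("old unread RA: %d, accessed: %d, young RA: %d\n",
               eviction_cost(&old_ra), eviction_cost(&used),
               eviction_cost(&young_ra));
        return 0;
}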

So I think futility is a measure of unaccessed aging, not necessarily of
ejection (which is a memory pressure response).

James



