Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Optimizing Page Cache Readahead Behavior

Hello!

On Fri 21-02-25 13:13:15, Kalesh Singh via Lsf-pc wrote:
> Problem Statement
> ===============
> 
> Readahead can result in unnecessary page cache pollution for mapped
> regions that are never accessed. Current mechanisms to disable
> readahead lack granularity, operating only at the file or VMA level.
> This proposal seeks to initiate discussion at LSFMM to explore
> potential solutions for optimizing page cache/readahead behavior.
> 
> 
> Background
> =========
> 
> The read-ahead heuristics on file-backed memory mappings can
> inadvertently populate the page cache with pages corresponding to
> regions that user-space processes are known never to access, e.g.
> ELF LOAD segment padding regions. While these pages are ultimately
> reclaimable, their presence causes unnecessary I/O operations,
> particularly when a substantial quantity of such regions exists.
> 
> Although the underlying file can be made sparse in these regions to
> mitigate I/O, readahead will still allocate discrete zero pages when
> populating the page cache within these ranges. These pages, while
> subject to reclaim, introduce additional churn to the LRU. This
> reclaim overhead is further exacerbated by filesystems that support
> "fault-around" semantics, which can populate the surrounding pages'
> PTEs if they are found present in the page cache.
> 
> While the memory impact may be negligible for large files containing a
> limited number of sparse regions, it becomes appreciable when there are
> many small mappings with numerous holes. This scenario can arise from
> efforts to minimize the vm_area_struct slab memory footprint.

OK, I agree the behavior you describe exists. But do you have some
real-world numbers showing its extent? I'm not looking for artificial
numbers - sure, bad cases can be constructed - but how big a practical
problem is this? If you can show that an average Android phone has 10% of
these useless pages in memory, then that's one thing and we should be
looking for some general solution. If it is more like 0.1%, then why
bother?

> Limitations of Existing Mechanisms
> ===========================
> 
> fadvise(..., POSIX_FADV_RANDOM, ...): Disables read-ahead for the
> entire file, rather than specific sub-regions. The offset and length
> parameters primarily serve the POSIX_FADV_WILLNEED [1] and
> POSIX_FADV_DONTNEED [2] cases.
> 
> madvise(..., MADV_RANDOM, ...): Similarly, this applies to the entire
> VMA, rather than specific sub-regions. [3]
> 
> Guard Regions: While guard regions for file-backed VMAs circumvent
> fault-around concerns, the fundamental issue of unnecessary page cache
> population persists. [4]
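
Just to make the granularity gap concrete, this is the best a process can
ask for today (an illustrative sketch; fd, addr and the pad_* values are
placeholders):

#include <fcntl.h>
#include <sys/mman.h>

void hint_padding_region(int fd, char *addr, off_t pad_off, size_t pad_len)
{
	/* offset/len are accepted, but POSIX_FADV_RANDOM flips a
	 * per-file flag, so readahead is affected for the whole file,
	 * not just [pad_off, pad_off + pad_len). */
	posix_fadvise(fd, pad_off, pad_len, POSIX_FADV_RANDOM);

	/* Sets VM_RAND_READ per-VMA; applying it to a sub-range splits
	 * the VMA, defeating the vm_area_struct footprint savings the
	 * proposal mentions. */
	madvise(addr + pad_off, pad_len, MADV_RANDOM);
}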

Somewhere else in the thread you complain about readahead extending past
the VMA. That's relatively easy to avoid, at least for readahead triggered
from filemap_fault() (i.e., do_async_mmap_readahead() and
do_sync_mmap_readahead()). I agree we could do that, and it seems like a
relatively uncontroversial change. Note that if someone accesses the file
through a standard read(2) or write(2) syscall or through a different
memory mapping, the limits won't apply, but such combinations of access
are not that common anyway.
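
Roughly, I'd imagine something like the following there (an untested
sketch; the helper and its placement are mine, though the file_ra_state
and vma fields are as in current mm/filemap.c):

/* Clamp the readahead window so it does not extend past the faulting
 * VMA; callers would be do_sync_mmap_readahead() and
 * do_async_mmap_readahead(). Untested illustration only. */
static void clamp_ra_to_vma(struct file_ra_state *ra,
			    struct vm_area_struct *vma)
{
	/* First file page offset past the end of this mapping. */
	pgoff_t vma_end = vma->vm_pgoff + vma_pages(vma);

	if (ra->start >= vma_end)
		ra->size = 0;
	else if (ra->start + ra->size > vma_end)
		ra->size = vma_end - ra->start;

	/* Keep the async part within the (possibly shrunken) window. */
	if (ra->async_size > ra->size)
		ra->async_size = ra->size;
}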

Regarding controlling readahead for various portions of the file - I'm
skeptical. In my opinion it would require too much bookkeeping on the
kernel side for such a niche use case (but maybe your numbers will show it
isn't as niche as I think :)). I can imagine you could just completely
turn off kernel readahead for the file and do your special readahead from
userspace - I think you could use either userfaultfd for triggering it or
the new fanotify FAN_PREACCESS events.
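
Without any new kernel support, the userspace side could be as simple as
this (untested; the range list would come from whatever already knows the
ELF layout, and error handling is omitted):

#define _GNU_SOURCE
#include <fcntl.h>

struct used_range {
	off_t off;
	size_t len;
};

static void prefetch_used_ranges(int fd, const struct used_range *r,
				 int nranges)
{
	/* Whole-file advice: no more kernel-initiated readahead. */
	posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);

	/* Pull in only the ranges the process will actually touch. */
	for (int i = 0; i < nranges; i++)
		readahead(fd, r[i].off, r[i].len);
}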

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR



