On Mon, Feb 24, 2025 at 03:14:04PM +0100, Jan Kara wrote:
> Hello!
>
> On Fri 21-02-25 13:13:15, Kalesh Singh via Lsf-pc wrote:
> > Problem Statement
> > =================
> >
> > Readahead can result in unnecessary page cache pollution for mapped
> > regions that are never accessed. Current mechanisms to disable
> > readahead lack granularity and rather operate at the file or VMA
> > level. This proposal seeks to initiate discussion at LSFMM to explore
> > potential solutions for optimizing page cache/readahead behavior.
> >
> > Background
> > ==========
> >
> > The readahead heuristics on file-backed memory mappings can
> > inadvertently populate the page cache with pages corresponding to
> > regions that user-space processes are known never to access, e.g.
> > ELF LOAD segment padding regions. While these pages are ultimately
> > reclaimable, their presence precipitates unnecessary I/O operations,
> > particularly when a substantial quantity of such regions exists.
> >
> > Although the underlying file can be made sparse in these regions to
> > mitigate I/O, readahead will still allocate discrete zero pages when
> > populating the page cache within these ranges. These pages, while
> > subject to reclaim, introduce additional churn to the LRU. This
> > reclaim overhead is further exacerbated in filesystems that support
> > "fault-around" semantics, which can populate the PTEs of surrounding
> > pages if they are found present in the page cache.
> >
> > While the memory impact may be negligible for large files containing
> > a limited number of sparse regions, it becomes appreciable for many
> > small mappings characterized by numerous holes. This scenario can
> > arise from efforts to minimize vm_area_struct slab memory footprint.
>
> OK, I agree the behavior you describe exists. But do you have some
> real-world numbers showing its extent? I'm not looking for artificial
> numbers - sure, bad cases can be constructed - but how big a practical
> problem is this? If you can show that the average Android phone has 10%
> of these useless pages in memory then that's one thing and we should be
> looking for some general solution. If it is more like 0.1%, then why
> bother?
>
> > Limitations of Existing Mechanisms
> > ==================================
> >
> > fadvise(..., POSIX_FADV_RANDOM, ...): Disables readahead for the
> > entire file, rather than specific sub-regions. The offset and length
> > parameters primarily serve the POSIX_FADV_WILLNEED [1] and
> > POSIX_FADV_DONTNEED [2] cases.
> >
> > madvise(..., MADV_RANDOM, ...): Similarly, this applies to the
> > entire VMA, rather than specific sub-regions. [3]
> >
> > Guard Regions: While guard regions for file-backed VMAs circumvent
> > fault-around concerns, the fundamental issue of unnecessary page
> > cache population persists. [4]
>
> Somewhere else in the thread you complain about readahead extending
> past the VMA. That's relatively easy to avoid, at least for readahead
> triggered from filemap_fault() (i.e., do_async_mmap_readahead() and
> do_sync_mmap_readahead()). I agree we could do that and it seems like a
> relatively uncontroversial change. Note that if someone accesses the
> file through a standard read(2) or write(2) syscall or through a
> different memory mapping, the limits won't apply, but such combinations
> of access are not that common anyway.
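If I understand you correctly, the clamping you describe would look
roughly like the below (an illustrative sketch only, not a tested patch;
the helper name is invented, though the field names match the current
struct file_ra_state and struct vm_area_struct):

/*
 * Illustrative only: clamp a readahead window so it never extends past
 * the end of the faulting VMA. Callers would presumably be
 * do_sync_mmap_readahead()/do_async_mmap_readahead() in mm/filemap.c;
 * the helper name is made up.
 */
static void ra_clamp_to_vma(struct vm_area_struct *vma,
			    struct file_ra_state *ra)
{
	/* First file page past the end of this VMA. */
	pgoff_t vma_end = vma->vm_pgoff +
			  ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT);

	if (ra->start >= vma_end) {
		/* Window starts beyond the VMA: nothing to read ahead. */
		ra->size = 0;
		ra->async_size = 0;
	} else if (ra->start + ra->size > vma_end) {
		/* Trim the window (and its async tail) at the VMA end. */
		ra->size = vma_end - ra->start;
		ra->async_size = min(ra->async_size, ra->size);
	}
}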
Hm, I'm not so sure though: map ELF files whose segments have different
mprotect() protections, or mprotect() different portions of a file, and
suddenly you lose all the readahead for the rest even though you're
reading sequentially? What about shared libraries with r/o parts and
exec parts? I think we'd really need to do some pretty careful checking
to ensure this wouldn't break some real-world use cases, esp. if we
really do mostly readahead data from the page cache.

> Regarding controlling readahead for various portions of the file - I'm
> skeptical. In my opinion it would require too much bookkeeping on the
> kernel side for such a niche use case (but maybe your numbers will show
> it isn't as niche as I think :)). I can imagine you could just
> completely turn off kernel readahead for the file and do your special
> readahead from userspace - I think you could use either userfaultfd for
> triggering it or the new fanotify FAN_PREACCESS events.

I'm opposed to anything that'll proliferate VMAs (and from what Kalesh
says, he is too!). I don't really see how we could avoid having to do
that for this kind of case, but I may be missing something... I've put a
rough sketch of what I understand the userspace side of your suggestion
to look like at the bottom of this mail.

>								Honza
> --
> Jan Kara <jack@xxxxxxxx>
> SUSE Labs, CR
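To make the "turn off kernel readahead and drive it from userspace" idea
concrete, I imagine something like the following minimal sketch (the
path, offset and length are placeholders, and the userfaultfd /
FAN_PREACCESS trigger side is elided):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int ret;
	int fd = open("/path/to/file", O_RDONLY);	/* placeholder */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Disable kernel readahead heuristics for the whole file... */
	ret = posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);
	if (ret)
		fprintf(stderr, "posix_fadvise: %s\n", strerror(ret));

	/*
	 * ...and explicitly prefetch only the ranges we know will be
	 * touched, e.g. on a userfaultfd/FAN_PREACCESS notification.
	 * Offset and length here are placeholders.
	 */
	if (readahead(fd, 0, 1 << 20) < 0)
		perror("readahead");

	close(fd);
	return 0;
}

That keeps the per-range policy entirely in userspace, at the cost of a
syscall per prefetched range plus whatever the trigger mechanism costs.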