On Sat, Feb 22, 2025 at 09:36:48PM -0800, Kalesh Singh wrote:
> On Sat, Feb 22, 2025 at 10:03 AM Kent Overstreet
> <kent.overstreet@xxxxxxxxx> wrote:
> >
> > On Fri, Feb 21, 2025 at 01:13:15PM -0800, Kalesh Singh wrote:
> > > Hi organizers of LSF/MM,
> > >
> > > I realize this is a late submission, but I was hoping there might
> > > still be a chance to have this topic considered for discussion.
> > >
> > > Problem Statement
> > > ===============
> > >
> > > Readahead can result in unnecessary page cache pollution for mapped
> > > regions that are never accessed. Current mechanisms to disable
> > > readahead lack granularity and instead operate at the file or VMA
> > > level. This proposal seeks to initiate discussion at LSFMM to explore
> > > potential solutions for optimizing page cache/readahead behavior.
> > >
> > >
> > > Background
> > > =========
> > >
> > > The read-ahead heuristics on file-backed memory mappings can
> > > inadvertently populate the page cache with pages corresponding to
> > > regions that user-space processes are known never to access, e.g. ELF
> > > LOAD segment padding regions. While these pages are ultimately
> > > reclaimable, their presence precipitates unnecessary I/O operations,
> > > particularly when a substantial quantity of such regions exists.
> > >
> > > Although the underlying file can be made sparse in these regions to
> > > mitigate I/O, readahead will still allocate discrete zero pages when
> > > populating the page cache within these ranges. These pages, while
> > > subject to reclaim, introduce additional churn to the LRU. This
> > > reclaim overhead is further exacerbated in filesystems that support
> > > "fault-around" semantics, which can populate the surrounding pages’
> > > PTEs if found present in the page cache.

One note - if you use guard regions, fault-around won't be performed on
them ;)

It seems strange to me that sparse regions would place duplicate zeroed
pages in the page cache...

> > >
> > > While the memory impact may be negligible for large files containing a
> > > limited number of sparse regions, it becomes appreciable for many
> > > small mappings characterized by numerous holes. This scenario can
> > > arise from efforts to minimize vm_area_struct slab memory footprint.

Presumably we're most concerned with _synchronous_ readahead here?
Because once you establish PG_readahead markers to trigger subsequent
asynchronous readahead, I don't think you can retain control.

I go into that more below.

> > >
> > > Limitations of Existing Mechanisms
> > > ===========================
> > >
> > > fadvise(..., POSIX_FADV_RANDOM, ...): disables read-ahead for the
> > > entire file, rather than specific sub-regions. The offset and length
> > > parameters primarily serve the POSIX_FADV_WILLNEED [1] and
> > > POSIX_FADV_DONTNEED [2] cases.
> > >
> > > madvise(..., MADV_RANDOM, ...): Similarly, this applies to the entire
> > > VMA, rather than specific sub-regions. [3]
> > >
> > > Guard Regions: While guard regions for file-backed VMAs circumvent
> > > fault-around concerns, the fundamental issue of unnecessary page cache
> > > population persists. [4]

Note, not for fault-around. But yes for readahead, unavoidably, as there
is no metadata at VMA level (intentionally).

> >
> Hi Kent. Thanks for taking a look at this.
>
> > What if we introduced something like
> >
> >         madvise(..., MADV_READAHEAD_BOUNDARY, offset)
> >
> > Would that be sufficient? And would a single readahead boundary offset
> > suffice?
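For concreteness, a minimal sketch of what the existing, coarse-grained
knobs look like from userspace - posix_fadvise() and madvise() are real
calls, while a per-range MADV_READAHEAD_BOUNDARY as proposed above is so
far only hypothetical:

  #include <fcntl.h>      /* posix_fadvise() */
  #include <sys/mman.h>   /* madvise() */

  /* Sketch only: neither call expresses a sub-region readahead policy. */
  static void disable_readahead_today(int fd, void *addr, size_t len)
  {
          /*
           * Disables readahead for the whole open file; for
           * POSIX_FADV_RANDOM the offset/len arguments do not restrict
           * the effect to a sub-range.
           */
          posix_fadvise(fd, 0, (off_t)len, POSIX_FADV_RANDOM);

          /*
           * Applies at VMA granularity: advising only part of a larger
           * mapping means splitting that VMA, which is exactly the slab
           * overhead the proposal wants to avoid.
           */
          madvise(addr, len, MADV_RANDOM);
  }

Either call disables readahead far more broadly than the ELF-padding
use case above actually needs, which is what motivates a boundary- or
range-based interface.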
>
> I like the idea of having boundaries. In this particular example the
> single boundary suffices, though I think we’ll need to support
> multiple (see below).
>
> One requirement that we’d like to meet is that the solution doesn’t
> cause VMA splits, to avoid additional slab usage, so perhaps fadvise()
> is better suited to this?

+1 to not causing VMA splits, but presumably you'd madvise() the whole
VMA anyway to adopt this boundary mode?

But if you're trying to do something sub-VMA, I mean I'm not sure
there's any way for you to do this without splitting the VMA?

You end up in the same situation as guard regions which is - how do we
encode this information in such a way as to _not_ require VMA
splitting, and for guard regions the answer is 'we encode it in the
page tables, and modify _fault_ behaviour'.

Obviously that won't work here, so you really have nowhere else to put
it. While readahead state is stored in struct file (->f_ra) [which is
somewhat iffy on a few levels but still], fundamentally for
asynchronous readahead there is no VMA-level context left to consult.

> Another behavior of “mmap readahead” is that it doesn’t really respect
> VMA (start, end) boundaries:

Right, but doesn't readahead strictly belong to the file/folios rather
than any specific mapping?

Fine for synchronous readahead potentially, as you could say - ok we're
major faulting, only bring in up to the VMA boundary.

But once you plant PG_readahead markers to trigger asynchronous
readahead on minor faults and you're into filemap_readahead(), you lose
all this kind of context. And is it really fair if you have multiple
mappings as well as potentially read() operations on a file?

I'm not sure how feasible it is to restrict beyond _initial
synchronous_ readahead, and I think you could only do that with VMA
metadata, and so you'd split the VMA, and wouldn't this defeat the
purpose somewhat?

> The below demonstrates readahead past the end of the mapped region of the file:
>
> sudo sync && sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' &&
> ./pollute_page_cache.sh
>
> Creating sparse file of size 25 pages
> Apparent Size: 100K
> Real Size: 0
> Number cached pages: 0
> Reading first 5 pages via mmap...
> Mapping and reading pages: [0, 6) of file 'myfile.txt'
> Number cached pages: 25
>
> Similarly the readahead can bring in pages before the start of the
> mapped region. I believe this is due to mmap “read-around” [6]:
>
> sudo sync && sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' &&
> ./pollute_page_cache.sh
>
> Creating sparse file of size 25 pages
> Apparent Size: 100K
> Real Size: 0
> Number cached pages: 0
> Reading last 5 pages via mmap...
> Mapping and reading pages: [20, 25) of file 'myfile.txt'
> Number cached pages: 25
>
> I’m not sure what the historical use cases for readahead past the VMA
> boundaries are, but at least in some scenarios this behavior is not
> desirable. For instance, many apps mmap uncompressed ELF files
> directly from a page-aligned offset within a zipped APK as a space
> saving and security feature. The readahead and read-around behaviors
> lead to unrelated resources from the zipped APK being populated in the
> page cache. I think in this case we’ll need to have more than a single
> boundary per file.
>
> A somewhat related but separate issue is that currently distinct pages
> are allocated in the page cache when reading sparse file holes. I
> think at least in the case of reading this should be avoidable.

This does seem like something that could be improved; it seems very
strange that we do this, though.
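As a rough illustration of the kind of measurement quoted in these
demos (pollute_page_cache.sh itself isn't posted in the thread, so this
is only an assumed equivalent), mincore() over a mapping of the whole
file is one way to produce the "Number cached pages" figure:

  #include <stdlib.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <sys/stat.h>

  /*
   * Count how many pages of 'path' are currently resident in the page
   * cache. Error handling is trimmed for brevity.
   */
  static long count_cached_pages(const char *path)
  {
          long page = sysconf(_SC_PAGESIZE);
          struct stat st;
          long cached = 0;
          int fd = open(path, O_RDONLY);

          if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
                  return -1;

          size_t npages = (st.st_size + page - 1) / page;
          unsigned char *vec = malloc(npages);
          /* Mapping alone does not fault pages in, so it doesn't skew the count. */
          void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

          if (!vec || map == MAP_FAILED)
                  return -1;

          /* mincore() reports, per page, whether the page is resident. */
          if (mincore(map, st.st_size, vec) == 0)
                  for (size_t i = 0; i < npages; i++)
                          cached += vec[i] & 1;

          munmap(map, st.st_size);
          free(vec);
          close(fd);
          return cached;
  }

Called before and after the mmap-and-read step, this gives the
before/after "Number cached pages" values shown in the output, without
relying on the noisy global Cached counter from /proc/meminfo.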
>
> sudo sync && sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' &&
> ./pollute_page_cache.sh
>
> Creating sparse file of size 1GB
> Apparent Size: 977M
> Real Size: 0
> Number cached pages: 0
> Meminfo Cached: 9078768 kB
> Reading 1GB of holes...
> Number cached pages: 250000
> Meminfo Cached: 10117324 kB
>
> (10117324-9078768)/4 = 259639 = ~250000 pages  # (global counter = some noise)
>
> --Kalesh