When testing large folio support with XFS on our servers, we observed that
only a few large folios are mapped when reading large files via mmap.
After a thorough analysis, I identified the cause as the
/sys/block/*/queue/read_ahead_kb setting.  On our test servers, this
parameter is set to 128KB.  After tuning it to 2MB, large folios work as
expected.  However, large folio behavior should not depend on the value of
read_ahead_kb; it would be more robust if the kernel could adapt to it
automatically.

With /sys/block/*/queue/read_ahead_kb set to 128KB and performing a
sequential read on a 1GB file using MADV_HUGEPAGE, the differences in
/proc/meminfo are as follows:

- before this patch
  FileHugePages:     18432 kB
  FilePmdMapped:      4096 kB

- after this patch
  FileHugePages:   1067008 kB
  FilePmdMapped:   1048576 kB

This shows that after applying the patch, the entire 1GB file is mapped to
huge pages.

The stable list is CCed, as without this patch, large folios don't
function optimally in the readahead path.

It's worth noting that if read_ahead_kb is set to a larger value that
isn't aligned with huge page sizes (e.g., 4MB + 128KB), it may still fail
to map to huge pages.

Link: https://lkml.kernel.org/r/20241108141710.9721-1-laoar.shao@xxxxxxxxx
Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
Signed-off-by: Yafang Shao <laoar.shao@xxxxxxxxx>
Tested-by: kernel test robot <oliver.sang@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
---
 mm/readahead.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Changes:
v2->v3:
- Fix the softlockup reported by kernel test robot
  https://lore.kernel.org/linux-fsdevel/202411292300.61edbd37-lkp@xxxxxxxxx/

v1->v2: https://lore.kernel.org/linux-mm/20241108141710.9721-1-laoar.shao@xxxxxxxxx/
- Drop the alignment (Matthew)
- Improve commit log (Andrew)

RFC->v1: https://lore.kernel.org/linux-mm/20241106092114.8408-1-laoar.shao@xxxxxxxxx/
- Simplify the code as suggested by Matthew

RFC: https://lore.kernel.org/linux-mm/20241104143015.34684-1-laoar.shao@xxxxxxxxx/

diff --git a/mm/readahead.c b/mm/readahead.c
index 3dc6c7a128dd..1dc3cffd4843 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -642,7 +642,11 @@ void page_cache_async_ra(struct readahead_control *ractl,
 			1UL << order);
 	if (index == expected) {
 		ra->start += ra->size;
-		ra->size = get_next_ra_size(ra, max_pages);
+		/*
+		 * In the case of MADV_HUGEPAGE, the actual size might exceed
+		 * the readahead window.
+		 */
+		ra->size = max(ra->size, get_next_ra_size(ra, max_pages));
 		ra->async_size = ra->size;
 		goto readit;
 	}
-- 
2.43.5
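
For reference, below is a minimal user-space sketch of the sizing math
(not kernel code).  The simplified get_next_ra_size() and the trimmed
struct file_ra_state here only approximate their counterparts in
mm/readahead.c, and the numbers assume 4KB pages and a 2MB PMD:

/*
 * Illustrative user-space sketch: why a 128KB readahead window starves
 * PMD-sized folios, and what max() in the patch preserves.
 */
#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

struct file_ra_state {
	unsigned long start;
	unsigned long size;		/* in 4KB pages */
	unsigned long async_size;
};

/* Rough approximation of the kernel's readahead growth: ramp up, clamp to max. */
static unsigned long get_next_ra_size(struct file_ra_state *ra, unsigned long max)
{
	unsigned long cur = ra->size;

	if (cur < max / 16)
		return 4 * cur;
	if (cur <= max)
		return 2 * cur;
	return max;
}

int main(void)
{
	/* read_ahead_kb = 128KB -> max_pages = 32; a PMD folio needs 512 pages. */
	unsigned long max_pages = 128 / 4;

	/* The MADV_HUGEPAGE path already bumped ra->size to a full PMD (512 pages). */
	struct file_ra_state ra = { .start = 0, .size = 512, .async_size = 512 };

	unsigned long before = get_next_ra_size(&ra, max_pages);		/* old code  */
	unsigned long after  = MAX(ra.size, get_next_ra_size(&ra, max_pages));	/* patched   */

	printf("old: next ra->size = %lu pages (%lu KB) -> too small for a PMD folio\n",
	       before, before * 4);
	printf("new: next ra->size = %lu pages (%lu KB) -> PMD-sized folios keep flowing\n",
	       after, after * 4);
	return 0;
}

With max_pages = 32 (128KB) and ra->size already raised to 512 pages by
the MADV_HUGEPAGE path, the old expression collapses the next window to
32 pages, while max() keeps it at 512, so subsequent async readahead can
continue allocating PMD-sized folios.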