Huge pages are detrimental for small files: they cause noticeable overhead
in both allocation performance and memory footprint. This patch aims to
address the issue by not allocating huge pages until the file has grown to
the size of a huge page. That covers most of the cases where huge pages
cause performance regressions.

A couple of notes:

 - if shmem_enabled is set to 'force', the limit is ignored: we still want
   to allocate as many huge pages as possible for functional testing;

 - the limit doesn't affect khugepaged behaviour: it can still collapse
   pages based on its own settings.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
---
 Documentation/vm/transhuge.txt | 3 +++
 mm/shmem.c                     | 5 +++++
 2 files changed, 8 insertions(+)

diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
index 2ec6adb5a4ce..d1889c7c8c46 100644
--- a/Documentation/vm/transhuge.txt
+++ b/Documentation/vm/transhuge.txt
@@ -238,6 +238,9 @@ values:
   - "force":
     Force the huge option on for all - very useful for testing;
 
+To avoid overhead for small files, we don't allocate huge pages for a file
+until it grows to the size of a huge page.
+
 == Need of application restart ==
 
 The transparent_hugepage/enabled values and tmpfs mount option only affect
diff --git a/mm/shmem.c b/mm/shmem.c
index ad7813d73ea7..c7b3cb5aecdc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1692,6 +1692,11 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 			goto alloc_huge;
 		/* TODO: implement fadvise() hints */
 		goto alloc_nohuge;
+	case SHMEM_HUGE_ALWAYS:
+		i_size = i_size_read(inode);
+		if (index < HPAGE_PMD_NR && i_size < HPAGE_PMD_SIZE)
+			goto alloc_nohuge;
+		break;
 	}
 
 alloc_huge:
-- 
2.9.3
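
For illustration only, here is a minimal userspace sketch (not part of the
patch) of how the new behaviour could be observed. It is written under
several assumptions: a 2MB PMD huge page size, /tmp being a tmpfs mount
with huge=always (or shmem_enabled set to 'always'), and the ShmemHugePages
counter being present in /proc/meminfo; the path /tmp/thp-grow-test is made
up for the example. The expectation is that a single small write leaves the
counter unchanged, while growing the file past the huge page size may start
allocating huge pages:

/*
 * Sketch: watch ShmemHugePages in /proc/meminfo while a tmpfs file grows
 * past the (assumed 2MB) PMD huge page size.  Path and sizes are
 * illustrative, not taken from the patch.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL << 20)	/* assumed PMD-sized huge page: 2MB */

/* Parse "ShmemHugePages: <n> kB" out of /proc/meminfo. */
static long shmem_huge_kb(void)
{
	char line[256];
	long kb = -1;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "ShmemHugePages: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;
}

int main(void)
{
	char buf[4096];
	size_t off;
	int fd = open("/tmp/thp-grow-test", O_CREAT | O_TRUNC | O_RDWR, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 0xaa, sizeof(buf));

	printf("ShmemHugePages before write:  %ld kB\n", shmem_huge_kb());

	/* File stays below the huge page size: expect no huge allocation. */
	if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
		perror("write");
	printf("ShmemHugePages small file:    %ld kB\n", shmem_huge_kb());

	/* Grow well past the huge page size: allocations may now go huge. */
	for (off = sizeof(buf); off < 2 * HPAGE_SIZE; off += sizeof(buf))
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
			perror("write");
	printf("ShmemHugePages large file:    %ld kB\n", shmem_huge_kb());

	close(fd);
	unlink("/tmp/thp-grow-test");
	return 0;
}

With the patch applied, the second counter read is expected to match the
first, since the file is still below the huge page size at that point; the
counter is system-wide, so other tmpfs users can perturb the numbers.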