The quilt patch titled
     Subject: mm/huge_memory: skip invalid debugfs new_order input for folio split
has been removed from the -mm tree.  Its filename was
     mm-huge_memory-skip-invalid-debugfs-new_order-input-for-folio-split.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Zi Yan <ziy@xxxxxxxxxx>
Subject: mm/huge_memory: skip invalid debugfs new_order input for folio split
Date: Thu, 7 Mar 2024 13:18:54 -0500

A user can pass an arbitrary new_order via debugfs for the folio split
test.  Although a new_order check was added to
split_huge_page_to_list_to_order() in the prior commit, these two
additional checks avoid unnecessary folio locking and
split_folio_to_order() calls.

Link: https://lkml.kernel.org/r/20240307181854.138928-2-zi.yan@xxxxxxxx
Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
Closes: https://lore.kernel.org/linux-mm/7dda9283-b437-4cf8-ab0d-83c330deb9c0@moroto.mountain/
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/huge_memory.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/mm/huge_memory.c~mm-huge_memory-skip-invalid-debugfs-new_order-input-for-folio-split
+++ a/mm/huge_memory.c
@@ -3486,6 +3486,9 @@ static int split_huge_pages_pid(int pid,
 		if (!is_transparent_hugepage(folio))
 			goto next;
 
+		if (new_order >= folio_order(folio))
+			goto next;
+
 		total++;
 		/*
 		 * For folios with private, split_huge_page_to_list_to_order()
@@ -3553,6 +3556,9 @@ static int split_huge_pages_in_file(cons
 		total++;
 		nr_pages = folio_nr_pages(folio);
 
+		if (new_order >= folio_order(folio))
+			goto next;
+
 		if (!folio_trylock(folio))
 			goto next;
 
_

Patches currently in -mm which might be from ziy@xxxxxxxxxx are
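
For readers unfamiliar with the interface the commit message refers to: the
folio split test is driven by writing a comma-separated tuple to
/sys/kernel/debug/split_huge_pages, as described in
Documentation/admin-guide/mm/transhuge.rst, and the optional new_order field
is what the new checks above guard against.  Below is a minimal sketch of how
a test might feed an out-of-range new_order through that interface; it
assumes debugfs is mounted at /sys/kernel/debug and run as root, and the pid
and address range used are purely hypothetical.

	/*
	 * Illustrative sketch only, not part of the patch above.
	 *
	 * Writes "<pid>,<vaddr_start>,<vaddr_end>,<new_order>" to the
	 * split_huge_pages debugfs file.  With new_order == 9 (the PMD
	 * order on x86-64), new_order >= folio_order(folio) holds for any
	 * PMD-sized THP in the range, so the checks added by this patch
	 * make the kernel skip the folio without locking it or calling
	 * split_folio_to_order().
	 */
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		FILE *f = fopen("/sys/kernel/debug/split_huge_pages", "w");

		if (!f) {
			perror("open split_huge_pages");
			return EXIT_FAILURE;
		}

		/* pid 1234 and the virtual address range are hypothetical. */
		if (fprintf(f, "1234,0x700000000000,0x700000200000,9") < 0)
			perror("write split_huge_pages");

		fclose(f);
		return EXIT_SUCCESS;
	}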