The quilt patch titled
     Subject: mm/folio: replace set_compound_order with folio_set_order
has been removed from the -mm tree.  Its filename was
     mm-folio-replace-set_compound_order-with-folio_set_order.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Tarun Sahu <tsahu@xxxxxxxxxxxxx>
Subject: mm/folio: replace set_compound_order with folio_set_order
Date: Mon, 12 Jun 2023 15:05:14 +0530

The patch ("mm/folio: Avoid special handling for order value 0 in
folio_set_order") [1] removed the need for special handling of order = 0
in folio_set_order().  Now folio_set_order() and set_compound_order() are
equivalent, so remove set_compound_order() and use folio_set_order()
instead.

[1] https://lore.kernel.org/all/20230609183032.13E08C433D2@xxxxxxxxxxxxxxx/

Link: https://lkml.kernel.org/r/20230612093514.689846-1-tsahu@xxxxxxxxxxxxx
Signed-off-by: Tarun Sahu <tsahu@xxxxxxxxxxxxx>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>
Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
Cc: Gerald Schaefer <gerald.schaefer@xxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm.h |   10 ----------
 mm/internal.h      |   32 ++++++++++++++++----------------
 2 files changed, 16 insertions(+), 26 deletions(-)

--- a/include/linux/mm.h~mm-folio-replace-set_compound_order-with-folio_set_order
+++ a/include/linux/mm.h
@@ -1232,16 +1232,6 @@ static inline void folio_set_compound_dt
 
 void destroy_large_folio(struct folio *folio);
 
-static inline void set_compound_order(struct page *page, unsigned int order)
-{
-	struct folio *folio = (struct folio *)page;
-
-	folio->_folio_order = order;
-#ifdef CONFIG_64BIT
-	folio->_folio_nr_pages = 1U << order;
-#endif
-}
-
 /* Returns the number of bytes in this potentially compound page. */
 static inline unsigned long page_size(struct page *page)
 {
--- a/mm/internal.h~mm-folio-replace-set_compound_order-with-folio_set_order
+++ a/mm/internal.h
@@ -387,12 +387,27 @@ extern void memblock_free_pages(struct p
 					unsigned int order);
 extern void __free_pages_core(struct page *page, unsigned int order);
 
+/*
+ * This will have no effect, other than possibly generating a warning, if the
+ * caller passes in a non-large folio.
+ */
+static inline void folio_set_order(struct folio *folio, unsigned int order)
+{
+	if (WARN_ON_ONCE(!order || !folio_test_large(folio)))
+		return;
+
+	folio->_folio_order = order;
+#ifdef CONFIG_64BIT
+	folio->_folio_nr_pages = 1U << order;
+#endif
+}
+
 static inline void prep_compound_head(struct page *page, unsigned int order)
 {
 	struct folio *folio = (struct folio *)page;
 
 	folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
-	set_compound_order(page, order);
+	folio_set_order(folio, order);
 	atomic_set(&folio->_entire_mapcount, -1);
 	atomic_set(&folio->_nr_pages_mapped, 0);
 	atomic_set(&folio->_pincount, 0);
@@ -432,21 +447,6 @@ void memmap_init_range(unsigned long, in
 int split_free_page(struct page *free_page, unsigned int order,
 			unsigned long split_pfn_offset);
 
-/*
- * This will have no effect, other than possibly generating a warning, if the
- * caller passes in a non-large folio.
- */
-static inline void folio_set_order(struct folio *folio, unsigned int order)
-{
-	if (WARN_ON_ONCE(!order || !folio_test_large(folio)))
-		return;
-
-	folio->_folio_order = order;
-#ifdef CONFIG_64BIT
-	folio->_folio_nr_pages = 1U << order;
-#endif
-}
-
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
 /*
_

Patches currently in -mm which might be from tsahu@xxxxxxxxxxxxx are
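
[Editor's note] For readers following along outside a kernel tree, the sketch
below is a minimal, self-contained userspace model of the behaviour
folio_set_order() has after this patch: order 0 and non-large folios are
rejected with a warning instead of being special-cased.  The struct layout,
the folio_test_large() helper and the WARN_ON_ONCE() macro here are simplified
stand-ins assumed for illustration only; they are not the real kernel
definitions.

#include <stdio.h>
#include <stdbool.h>

struct folio {
	unsigned int _folio_order;	/* folio spans 2^order pages */
	unsigned long _folio_nr_pages;	/* cached 1 << order (64-bit kernels only) */
	bool large;			/* stand-in for the "large folio" test */
};

/* Simplified stand-in for the kernel's folio_test_large(). */
static bool folio_test_large(const struct folio *folio)
{
	return folio->large;
}

/* Simplified stand-in for WARN_ON_ONCE(): reports every time, returns cond. */
#define WARN_ON_ONCE(cond) \
	((cond) ? (fprintf(stderr, "warning: %s\n", #cond), true) : false)

/*
 * Mirrors the post-patch folio_set_order(): bail out on order 0 or a
 * non-large folio, otherwise record the order and the derived page count.
 */
static void folio_set_order(struct folio *folio, unsigned int order)
{
	if (WARN_ON_ONCE(!order || !folio_test_large(folio)))
		return;

	folio->_folio_order = order;
	folio->_folio_nr_pages = 1UL << order;
}

int main(void)
{
	struct folio f = { .large = true };

	folio_set_order(&f, 4);		/* valid: a 16-page folio */
	printf("order=%u nr_pages=%lu\n", f._folio_order, f._folio_nr_pages);

	folio_set_order(&f, 0);		/* warns and returns: order 0 is rejected */
	return 0;
}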