The patch titled
     Subject: mm: replace xa_get_order with xas_get_order where appropriate
has been added to the -mm mm-unstable branch.  Its filename is
     mm-replace-xa_get_order-with-xas_get_order-where-appropriate.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-replace-xa_get_order-with-xas_get_order-where-appropriate.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Subject: mm: replace xa_get_order with xas_get_order where appropriate
Date: Fri, 6 Sep 2024 16:05:12 -0700

Tracing invalidation and truncation operations on large files showed
that xa_get_order() is among the functions where the kernel spends the
most CPU time.  xa_get_order() has to walk the tree from the root to
the node holding the given index before it can extract the entry's
order.  In several places, however, it is called from within an
already ongoing tree traversal, where the xa_state already points at
the right node and a second walk is unnecessary.  Use xas_get_order()
in those places instead.

Link: https://lkml.kernel.org/r/20240906230512.124643-1-shakeel.butt@xxxxxxxxx
Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Nhat Pham <nphamcs@xxxxxxxxx>
Cc: Liam Howlett <liam.howlett@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/filemap.c |    6 +++---
 mm/shmem.c   |    2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

--- a/mm/filemap.c~mm-replace-xa_get_order-with-xas_get_order-where-appropriate
+++ a/mm/filemap.c
@@ -2112,7 +2112,7 @@ unsigned find_lock_entries(struct addres
 			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
 					folio);
 		} else {
-			nr = 1 << xa_get_order(&mapping->i_pages, xas.xa_index);
+			nr = 1 << xas_get_order(&xas);
 			base = xas.xa_index & ~(nr - 1);
 			/* Omit order>0 value which begins before the start */
 			if (base < *start)
@@ -3005,7 +3005,7 @@ unlock:
 static inline size_t seek_folio_size(struct xa_state *xas, struct folio *folio)
 {
 	if (xa_is_value(folio))
-		return PAGE_SIZE << xa_get_order(xas->xa, xas->xa_index);
+		return PAGE_SIZE << xas_get_order(xas);
 	return folio_size(folio);
 }
 
@@ -4301,7 +4301,7 @@ static void filemap_cachestat(struct add
 		if (xas_retry(&xas, folio))
 			continue;
 
-		order = xa_get_order(xas.xa, xas.xa_index);
+		order = xas_get_order(&xas);
 		nr_pages = 1 << order;
 		folio_first_index = round_down(xas.xa_index, 1 << order);
 		folio_last_index = folio_first_index + nr_pages - 1;
--- a/mm/shmem.c~mm-replace-xa_get_order-with-xas_get_order-where-appropriate
+++ a/mm/shmem.c
@@ -890,7 +890,7 @@ unsigned long shmem_partial_swap_usage(s
 		if (xas_retry(&xas, page))
 			continue;
 		if (xa_is_value(page))
-			swapped += 1 << xa_get_order(xas.xa, xas.xa_index);
+			swapped += 1 << xas_get_order(&xas);
 		if (xas.xa_index == max)
 			break;
 		if (need_resched()) {
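[Editor's note: for readers unfamiliar with the XArray API, the sketch
below illustrates the pattern this patch optimizes.  It is not part of
the patch; the function name and counting logic are made up for
illustration.  Inside an xas_for_each() walk the cursor (the xa_state)
already sits on the node holding the current entry, so xas_get_order()
can read the order directly, whereas xa_get_order() would re-walk the
tree from the root for every entry.]

/*
 * Illustrative sketch only -- not from the patch.  Counts the pages
 * covered by multi-order entries in [start, end] during a single walk.
 */
#include <linux/xarray.h>

static unsigned long count_pages_sketch(struct xarray *xa,
					unsigned long start,
					unsigned long end)
{
	XA_STATE(xas, xa, start);
	void *entry;
	unsigned long nr_pages = 0;

	rcu_read_lock();
	xas_for_each(&xas, entry, end) {
		if (xas_retry(&xas, entry))
			continue;
		/*
		 * Before the patch this kind of loop would do
		 *	nr_pages += 1UL << xa_get_order(xas.xa, xas.xa_index);
		 * restarting a root-to-node walk per entry.  The cursor
		 * already points at the node, so ask it directly:
		 */
		nr_pages += 1UL << xas_get_order(&xas);
	}
	rcu_read_unlock();

	return nr_pages;
}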
_

Patches currently in -mm which might be from shakeel.butt@xxxxxxxxx are

mm-replace-xa_get_order-with-xas_get_order-where-appropriate.patch