On 26 Feb 2025, at 2:11, Baolin Wang wrote:

> Hi Zi,
>
> On 2025/2/19 07:50, Zi Yan wrote:
>> A preparation patch for non-uniform folio split, which always splits a
>> folio in half iteratively, and for minimal xarray entry split.
>>
>> Currently, xas_split_alloc() and xas_split() always split all slots from a
>> multi-index entry. They cost the same number of xa_node as the
>> to-be-split slots. For example, to split an order-9 entry, which takes
>> 2^(9-6)=8 slots, assuming XA_CHUNK_SHIFT is 6 (!CONFIG_BASE_SMALL), 8
>> xa_node are needed. Instead, xas_try_split() is intended to be used
>> iteratively to split the order-9 entry into 2 order-8 entries, then split
>> one order-8 entry, based on the given index, into 2 order-7 entries, ...,
>> and split one order-1 entry into 2 order-0 entries. When splitting the
>> order-6 entry and a new xa_node is needed, xas_try_split() will try to
>> allocate one if possible. As a result, xas_try_split() would only need
>> one xa_node instead of 8.
>>
>> When a new xa_node is needed during the split, xas_try_split() can try to
>> allocate one but no more. -ENOMEM will be returned if a node cannot be
>> allocated. -EINVAL will be returned if a sibling node is split or a cascade
>> split happens, where two or more new nodes are needed, since these are not
>> supported by xas_try_split().
>>
>> xas_split_alloc() and xas_split() split an order-9 to order-0:
>>
>>          ---------------------------------
>>          |   |   |   |   |   |   |   |   |
>>          | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
>>          |   |   |   |   |   |   |   |   |
>>          ---------------------------------
>>            |   |                   |   |
>>      -------   ---               ---   -------
>>      |           |      ...      |           |
>>      V           V               V           V
>> ----------- -----------     ----------- -----------
>> | xa_node | | xa_node | ... | xa_node | | xa_node |
>> ----------- -----------     ----------- -----------
>>
>> xas_try_split() splits an order-9 to order-0:
>>
>>          ---------------------------------
>>          |   |   |   |   |   |   |   |   |
>>          | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
>>          |   |   |   |   |   |   |   |   |
>>          ---------------------------------
>>            |
>>            |
>>            V
>>       -----------
>>       | xa_node |
>>       -----------
>>
>> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
>> Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
>> Cc: David Hildenbrand <david@xxxxxxxxxx>
>> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
>> Cc: John Hubbard <jhubbard@xxxxxxxxxx>
>> Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
>> Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
>> Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
>> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
>> Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
>> Cc: Yang Shi <yang@xxxxxxxxxxxxxxxxxxxxxx>
>> Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
>> Cc: Zi Yan <ziy@xxxxxxxxxx>
>> ---
>>   Documentation/core-api/xarray.rst |  14 ++-
>>   include/linux/xarray.h            |   7 ++
>>   lib/test_xarray.c                 |  47 ++++++++++
>>   lib/xarray.c                      | 138 ++++++++++++++++++++++++++----
>>   tools/testing/radix-tree/Makefile |   1 +
>>   5 files changed, 190 insertions(+), 17 deletions(-)
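For anyone following along, the intended calling pattern is roughly the
sketch below. It is not code from this patch; example_shrink_entry() and its
parameters are placeholder names, and GFP_NOWAIT is used here only because
xa_lock is held across the calls:

	/* Sketch: reduce the order of the entry at @index one step at a time. */
	static void example_shrink_entry(struct address_space *mapping, void *entry,
			pgoff_t index, unsigned int old_order, unsigned int new_order)
	{
		XA_STATE(xas, &mapping->i_pages, index);
		unsigned int order;

		xas_lock(&xas);
		for (order = old_order; order > new_order; order--) {
			/* the size of the new entries is passed via @xas */
			xas_set_order(&xas, index, order - 1);
			xas_try_split(&xas, entry, order, GFP_NOWAIT);
			if (xas_error(&xas))
				break;	/* -ENOMEM or -EINVAL; fall back or retry */
		}
		xas_unlock(&xas);
	}

Each call needs at most one new xa_node, which is what makes the non-uniform
split cheaper than xas_split_alloc() + xas_split().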
>>
>> diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
>> index f6a3eef4fe7f..c6c91cbd0c3c 100644
>> --- a/Documentation/core-api/xarray.rst
>> +++ b/Documentation/core-api/xarray.rst
>> @@ -489,7 +489,19 @@ Storing ``NULL`` into any index of a multi-index entry will set the
>>   entry at every index to ``NULL`` and dissolve the tie.  A multi-index
>>   entry can be split into entries occupying smaller ranges by calling
>>   xas_split_alloc() without the xa_lock held, followed by taking the lock
>> -and calling xas_split().
>> +and calling xas_split(), or by calling xas_try_split() with the xa_lock held.  The
>> +difference between xas_split_alloc()+xas_split() and xas_try_split() is
>> +that xas_split_alloc() + xas_split() split the entry from the original
>> +order to the new order in one shot uniformly, whereas xas_try_split()
>> +iteratively splits the entry containing the index non-uniformly.
>> +For example, to split an order-9 entry, which takes 2^(9-6)=8 slots,
>> +assuming ``XA_CHUNK_SHIFT`` is 6, xas_split_alloc() + xas_split() need
>> +8 xa_node.  xas_try_split() splits the order-9 entry into
>> +2 order-8 entries, then splits one order-8 entry, based on the given index,
>> +into 2 order-7 entries, ..., and splits one order-1 entry into 2 order-0 entries.
>> +When splitting the order-6 entry and a new xa_node is needed, xas_try_split()
>> +will try to allocate one if possible.  As a result, xas_try_split() would only
>> +need 1 xa_node instead of 8.
>>
>>   Functions and structures
>>   ========================
>>
>> diff --git a/include/linux/xarray.h b/include/linux/xarray.h
>> index 0b618ec04115..9eb8c7425090 100644
>> --- a/include/linux/xarray.h
>> +++ b/include/linux/xarray.h
>> @@ -1555,6 +1555,8 @@ int xa_get_order(struct xarray *, unsigned long index);
>>   int xas_get_order(struct xa_state *xas);
>>   void xas_split(struct xa_state *, void *entry, unsigned int order);
>>   void xas_split_alloc(struct xa_state *, void *entry, unsigned int order, gfp_t);
>> +void xas_try_split(struct xa_state *xas, void *entry, unsigned int order,
>> +		gfp_t gfp);
>>   #else
>>   static inline int xa_get_order(struct xarray *xa, unsigned long index)
>>   {
>> @@ -1576,6 +1578,11 @@ static inline void xas_split_alloc(struct xa_state *xas, void *entry,
>>   		unsigned int order, gfp_t gfp)
>>   {
>>   }
>> +
>> +static inline void xas_try_split(struct xa_state *xas, void *entry,
>> +		unsigned int order, gfp_t gfp)
>> +{
>> +}
>>   #endif
>>
>>   /**
>
> [snip]
>
>> diff --git a/lib/xarray.c b/lib/xarray.c
>> index 116e9286c64e..b9a63d7fbd58 100644
>> --- a/lib/xarray.c
>> +++ b/lib/xarray.c
>> @@ -1007,6 +1007,31 @@ static void node_set_marks(struct xa_node *node, unsigned int offset,
>>   	}
>>   }
>>
>> +static struct xa_node *__xas_alloc_node_for_split(struct xa_state *xas,
>> +		void *entry, gfp_t gfp)
>> +{
>> +	unsigned int i;
>> +	void *sibling = NULL;
>> +	struct xa_node *node;
>> +	unsigned int mask = xas->xa_sibs;
>> +
>> +	node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp);
>> +	if (!node)
>> +		return NULL;
>> +	node->array = xas->xa;
>> +	for (i = 0; i < XA_CHUNK_SIZE; i++) {
>> +		if ((i & mask) == 0) {
>> +			RCU_INIT_POINTER(node->slots[i], entry);
>> +			sibling = xa_mk_sibling(i);
>> +		} else {
>> +			RCU_INIT_POINTER(node->slots[i], sibling);
>> +		}
>> +	}
>> +	RCU_INIT_POINTER(node->parent, xas->xa_alloc);
>> +
>> +	return node;
>> +}
>> +
>>   /**
>>    * xas_split_alloc() - Allocate memory for splitting an entry.
>>    * @xas: XArray operation state.
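As an aside, the slot layout that __xas_alloc_node_for_split() writes into
the new node can be illustrated with a small stand-alone program.  This is
not kernel code; the mask and slot count are example values only:

	#include <stdio.h>

	/*
	 * With mask == xas->xa_sibs, every (mask + 1)-th slot holds the entry
	 * itself and the slots in between hold a sibling reference back to
	 * the nearest canonical slot.
	 */
	int main(void)
	{
		unsigned int mask = 3;		/* e.g. xa_sibs == 3 */
		unsigned int nr_slots = 16;	/* shortened chunk for readability */
		unsigned int i;

		for (i = 0; i < nr_slots; i++) {
			if ((i & mask) == 0)
				printf("slot %2u: entry\n", i);
			else
				printf("slot %2u: sibling of slot %u\n",
				       i, i & ~mask);
		}
		return 0;
	}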
>> @@ -1025,7 +1050,6 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
>>   		gfp_t gfp)
>>   {
>>   	unsigned int sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
>> -	unsigned int mask = xas->xa_sibs;
>>
>>   	/* XXX: no support for splitting really large entries yet */
>>   	if (WARN_ON(xas->xa_shift + 2 * XA_CHUNK_SHIFT <= order))
>> @@ -1034,23 +1058,9 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
>>   		return;
>>
>>   	do {
>> -		unsigned int i;
>> -		void *sibling = NULL;
>> -		struct xa_node *node;
>> -
>> -		node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp);
>> +		struct xa_node *node = __xas_alloc_node_for_split(xas, entry, gfp);
>>   		if (!node)
>>   			goto nomem;
>> -		node->array = xas->xa;
>> -		for (i = 0; i < XA_CHUNK_SIZE; i++) {
>> -			if ((i & mask) == 0) {
>> -				RCU_INIT_POINTER(node->slots[i], entry);
>> -				sibling = xa_mk_sibling(i);
>> -			} else {
>> -				RCU_INIT_POINTER(node->slots[i], sibling);
>> -			}
>> -		}
>> -		RCU_INIT_POINTER(node->parent, xas->xa_alloc);
>>   		xas->xa_alloc = node;
>>   	} while (sibs-- > 0);
>>
>> @@ -1122,6 +1132,102 @@ void xas_split(struct xa_state *xas, void *entry, unsigned int order)
>>   	xas_update(xas, node);
>>   }
>>   EXPORT_SYMBOL_GPL(xas_split);
>> +
>> +/**
>> + * xas_try_split() - Try to split a multi-index entry.
>> + * @xas: XArray operation state.
>> + * @entry: New entry to store in the array.
>> + * @order: Current entry order.
>> + * @gfp: Memory allocation flags.
>> + *
>> + * The size of the new entries is set in @xas. The value in @entry is
>> + * copied to all the replacement entries. If and only if one xa_node needs to
>> + * be allocated, the function will use @gfp to get one. If more xa_node are
>> + * needed, the function returns an -EINVAL error.
>> + *
>> + * Context: Any context. The caller should hold the xa_lock.
>> + */
>> +void xas_try_split(struct xa_state *xas, void *entry, unsigned int order,
>> +		gfp_t gfp)
>
> xas_try_split() may sleep if the 'gfp' flags permit it while the xa_lock is
> held, which can cause issues. So can we add a check for the 'gfp' flags, or
> only use GFP_NOWAIT?

You mean only allow gfp to be GFP_NOWAIT or GFP_ATOMIC?

Best Regards,
Yan, Zi
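P.S. The kind of guard being discussed could look something like the sketch
below.  It is illustrative only and not part of the patch; the wrapper name
is made up, while gfpflags_allow_blocking() is the existing helper from
include/linux/gfp.h:

	/* Hypothetical wrapper: never allow a sleeping allocation here. */
	static void xas_try_split_nowait(struct xa_state *xas, void *entry,
			unsigned int order, gfp_t gfp)
	{
		/* xa_lock is held, so a blocking allocation would be a bug */
		if (WARN_ON_ONCE(gfpflags_allow_blocking(gfp)))
			gfp = GFP_NOWAIT | __GFP_NOWARN;

		xas_try_split(xas, entry, order, gfp);
	}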