We limit the range on split so that allocating (sibs + 1) nodes is
enough to meet the need. This means the new order can be at most one
level below the old order. But the current range check doesn't cover
this well. For example, if the old order is (3 * XA_CHUNK_SHIFT), a
new order of XA_CHUNK_SHIFT passes the check even though the new
order is two levels below the old order.

Do the check on the shift directly to make sure the range is within
the limit.

Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
CC: Johannes Weiner <hannes@xxxxxxxxxxx>
CC: Shakeel Butt <shakeelb@xxxxxxxxxx>
CC: Muchun Song <songmuchun@xxxxxxxxxxxxx>
CC: Vlastimil Babka <vbabka@xxxxxxx>
---
 lib/xarray.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index aa9dc9b9417f..2c13fd9a9cf2 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1019,10 +1019,11 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
 		gfp_t gfp)
 {
 	unsigned int sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
+	unsigned int xa_shift = order - (order % XA_CHUNK_SHIFT);
 	unsigned int mask = xas->xa_sibs;
 
 	/* XXX: no support for splitting really large entries yet */
-	if (WARN_ON(xas->xa_shift + 2 * XA_CHUNK_SHIFT < order))
+	if (WARN_ON(xas->xa_shift + XA_CHUNK_SHIFT < xa_shift))
 		goto nomem;
 	if (xas->xa_shift + XA_CHUNK_SHIFT > order)
 		return;
-- 
2.33.1
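
P.S. For anyone cross-checking the arithmetic, below is a minimal
standalone sketch (not kernel code) contrasting the two checks on the
changelog's example. It assumes XA_CHUNK_SHIFT == 6 (the kernel
default without CONFIG_BASE_SMALL) and that xas->xa_shift is the
shift of the new (smaller) entries:

/*
 * Standalone sketch, not kernel code: contrasts the old and new
 * range checks on the changelog's example. Assumes
 * XA_CHUNK_SHIFT == 6 and that new_shift plays the role of
 * xas->xa_shift, the shift of the new entries.
 */
#include <stdbool.h>
#include <stdio.h>

#define XA_CHUNK_SHIFT 6

/* Old check: warn when order is more than two chunk levels above
 * the new shift. */
static bool old_check_warns(unsigned int new_shift, unsigned int order)
{
	return new_shift + 2 * XA_CHUNK_SHIFT < order;
}

/* New check: round the old order down to the shift of the node
 * holding the old entry and compare shifts directly, so only a
 * one-level split can pass. */
static bool new_check_warns(unsigned int new_shift, unsigned int order)
{
	unsigned int old_shift = order - (order % XA_CHUNK_SHIFT);

	return new_shift + XA_CHUNK_SHIFT < old_shift;
}

int main(void)
{
	unsigned int order = 3 * XA_CHUNK_SHIFT;	/* old order: 18 */
	unsigned int new_shift = XA_CHUNK_SHIFT;	/* new order: 6 */

	/* Old check: 6 + 12 < 18 is false, so the split slips through. */
	printf("old check warns: %d\n", old_check_warns(new_shift, order));
	/* New check: 6 + 6 < 18 is true, so the WARN_ON catches it. */
	printf("new check warns: %d\n", new_check_warns(new_shift, order));
	return 0;
}

With these values the old check prints 0 (the two-level split is
allowed) while the new check prints 1 (it is rejected), matching the
case described in the changelog.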