From: Jung-JaeJoon <rgbi3307@xxxxxxxxx>

If there are not enough nodes, mas_node_count() sets an error state via
mas_set_err() and returns control flow to the beginning. On the return
path, mas_nomem() checks the error state, allocates new nodes, and
resumes execution. In particular, when this happens in mas_split(),
inside the slow_path section executed from mas_wr_modify(), work is
repeated unnecessarily, slowing things down as in the flow below:

_begin:
  mas_wr_modify()
    --> if (new_end >= mt_slots[wr_mas->type])
    --> goto slow_path

slow_path:
  --> mas_wr_bnode()
      --> mas_store_b_node()
      --> mas_commit_b_node()
          --> mas_split()
              --> mas_node_count()  /* not enough nodes: return to _begin */

If mas_node_count() is instead executed before entering slow_path, the
nodes are allocated up front and slow_path is not entered repeatedly,
which improves execution efficiency.

Signed-off-by: JaeJoon Jung <rgbi3307@xxxxxxxxx>
---
 lib/maple_tree.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 2d7d27e6ae3c..b42a4e70d229 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4176,8 +4176,13 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 	 * path.
 	 */
 	new_end = mas_wr_new_end(wr_mas);
-	if (new_end >= mt_slots[wr_mas->type])
+	if (new_end >= mt_slots[wr_mas->type]) {
+		mas->depth = mas_mt_height(mas);
+		mas_node_count(mas, 1 + mas->depth * 2);
+		if (mas_is_err(mas))
+			return;
 		goto slow_path;
+	}
 
 	/* Attempt to append */
 	if (mas_wr_append(wr_mas, new_end))
-- 
2.17.1