The quilt patch titled
     Subject: mm/mempolicy: refactor a read-once mechanism into a function for re-use
has been removed from the -mm tree.  Its filename was
     mm-mempolicy-refactor-a-read-once-mechanism-into-a-function-for-re-use.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Gregory Price <gourry.memverge@xxxxxxxxx>
Subject: mm/mempolicy: refactor a read-once mechanism into a function for re-use
Date: Thu, 25 Jan 2024 13:43:43 -0500

Move the use of barrier(), which forces policy->nodemask onto the stack,
into a function read_once_policy_nodemask() so that it can be re-used.

Link: https://lkml.kernel.org/r/20240125184345.47074-3-gregory.price@xxxxxxxxxxxx
Signed-off-by: Gregory Price <gregory.price@xxxxxxxxxxxx>
Suggested-by: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Frank van der Linden <fvdl@xxxxxxxxxx>
Cc: Hasan Al Maruf <Hasan.Maruf@xxxxxxx>
Cc: Honggyu Kim <honggyu.kim@xxxxxx>
Cc: Hyeongtak Ji <hyeongtak.ji@xxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Rakie Kim <rakie.kim@xxxxxx>
Cc: Ravi Jonnalagadda <ravis.opensrc@xxxxxxxxxx>
Cc: Srinivasulu Thanneeru <sthanneeru.opensrc@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mempolicy.c |   26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

--- a/mm/mempolicy.c~mm-mempolicy-refactor-a-read-once-mechanism-into-a-function-for-re-use
+++ a/mm/mempolicy.c
@@ -1890,6 +1890,20 @@ unsigned int mempolicy_slab_node(void)
 	}
 }
 
+static unsigned int read_once_policy_nodemask(struct mempolicy *pol,
+					      nodemask_t *mask)
+{
+	/*
+	 * barrier stabilizes the nodemask locally so that it can be iterated
+	 * over safely without concern for changes. Allocators validate node
+	 * selection does not violate mems_allowed, so this is safe.
+	 */
+	barrier();
+	memcpy(mask, &pol->nodes, sizeof(nodemask_t));
+	barrier();
+	return nodes_weight(*mask);
+}
+
 /*
  * Do static interleaving for interleave index @ilx.  Returns the ilx'th
  * node in pol->nodes (starting from ilx=0), wrapping around if ilx
@@ -1897,20 +1911,12 @@ unsigned int mempolicy_slab_node(void)
  * exceeds the number of present nodes.
  */
 static unsigned int interleave_nid(struct mempolicy *pol, pgoff_t ilx)
 {
-	nodemask_t nodemask = pol->nodes;
+	nodemask_t nodemask;
 	unsigned int target, nnodes;
 	int i;
 	int nid;
-	/*
-	 * The barrier will stabilize the nodemask in a register or on
-	 * the stack so that it will stop changing under the code.
-	 *
-	 * Between first_node() and next_node(), pol->nodes could be changed
-	 * by other threads.  So we put pol->nodes in a local stack.
-	 */
-	barrier();
-	nnodes = nodes_weight(nodemask);
+	nnodes = read_once_policy_nodemask(pol, &nodemask);
 	if (!nnodes)
 		return numa_node_id();
 	target = ilx % nnodes;
_

Patches currently in -mm which might be from gourry.memverge@xxxxxxxxx are

mm-mempolicy-introduce-mpol_weighted_interleave-for-weighted-interleaving.patch
mm-mempolicy-change-cur_il_weight-to-atomic-and-carry-the-node-with-it.patch
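
The refactored interleave_nid() in the diff above is the first re-user of the
helper.  For readers wanting the caller-side pattern in one place, a minimal
sketch follows; example_interleave_nid() is a hypothetical name and the body
is illustrative only, not code from this series:

/*
 * Illustrative sketch (not part of this patch): a caller takes a stable
 * snapshot of pol->nodes via read_once_policy_nodemask() and then walks
 * only the local copy, never pol->nodes itself, so a concurrent rebind
 * cannot change the mask mid-iteration.
 */
static unsigned int example_interleave_nid(struct mempolicy *pol, pgoff_t ilx)
{
	nodemask_t nodemask;
	unsigned int target, nnodes;
	int i;
	int nid;

	/*
	 * Copy pol->nodes to the stack once; the barriers in the helper
	 * keep the compiler from re-reading the shared mask.
	 */
	nnodes = read_once_policy_nodemask(pol, &nodemask);
	if (!nnodes)
		return numa_node_id();

	/* Compute against the stabilized snapshot only. */
	target = ilx % nnodes;
	nid = first_node(nodemask);
	for (i = 0; i < target; i++)
		nid = next_node(nid, nodemask);
	return nid;
}

Any further caller (for example the weighted-interleave work still queued in
-mm, listed above) would presumably follow the same shape: snapshot first,
then compute against the snapshot.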