1. Removing redundant checks for current->mempolicy, with a more concise
   check order.
2. Using READ_ONCE(current->mempolicy) for a safe, single read of
   current->mempolicy, preventing potential race conditions.
3. Optimizing the scope of task_lock(current). The lock now only protects
   the critical section where the mempolicy is accessed, reducing how long
   the lock is held.

Signed-off-by: Zhen Ni <zhen.ni@xxxxxxxxxxxx>
---
 mm/mempolicy.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b646fab3e45e..8bff8830b7e6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2132,11 +2132,14 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 {
 	struct mempolicy *mempolicy;
 
-	if (!(mask && current->mempolicy))
+	if (!mask)
+		return false;
+
+	mempolicy = READ_ONCE(current->mempolicy);
+	if (!mempolicy)
 		return false;
 
 	task_lock(current);
-	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
 	case MPOL_PREFERRED:
 	case MPOL_PREFERRED_MANY:
-- 
2.20.1