On Tue, Nov 29, 2022 at 04:58:21PM +0100, Uladzislau Rezki (Sony) wrote:
> Currently the need_offload_krc() function requires krcp->lock to
> be held, because krcp->head cannot be checked concurrently.
> 
> Fix this by updating krcp->head using the WRITE_ONCE() macro, so
> the check becomes lock-free and readers can safely see valid data
> without any locking.

Don't we also need to use READ_ONCE() for the code loading this
krcp->head pointer?  Or do the remaining plain C-language accesses
somehow avoid running concurrently with those new WRITE_ONCE()
invocations?

							Thanx, Paul

> Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> ---
>  kernel/rcu/tree.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 445f8c11a9a3..c94c17194299 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -3058,7 +3058,7 @@ static void kfree_rcu_monitor(struct work_struct *work)
>  		// objects queued on the linked list.
>  		if (!krwp->head_free) {
>  			krwp->head_free = krcp->head;
> -			krcp->head = NULL;
> +			WRITE_ONCE(krcp->head, NULL);
>  		}
>  
>  		WRITE_ONCE(krcp->count, 0);
> @@ -3072,6 +3072,8 @@ static void kfree_rcu_monitor(struct work_struct *work)
>  		}
>  	}
>  
> +	raw_spin_unlock_irqrestore(&krcp->lock, flags);
> +
>  	// If there is nothing to detach, it means that our job is
>  	// successfully done here. In case of having at least one
>  	// of the channels that is still busy we should rearm the
>  	// work to repeat an attempt. Because previous batches are
>  	// still in progress.
>  	if (need_offload_krc(krcp))
>  		schedule_delayed_monitor_work(krcp);
> -
> -	raw_spin_unlock_irqrestore(&krcp->lock, flags);
>  }
>  
>  static enum hrtimer_restart
> @@ -3250,7 +3250,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
>  
>  		head->func = ptr;
>  		head->next = krcp->head;
> -		krcp->head = head;
> +		WRITE_ONCE(krcp->head, head);
>  		success = true;
>  	}
>  
> @@ -3327,15 +3327,12 @@ static struct shrinker kfree_rcu_shrinker = {
>  void __init kfree_rcu_scheduler_running(void)
>  {
>  	int cpu;
> -	unsigned long flags;
>  
>  	for_each_possible_cpu(cpu) {
>  		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
> 
> -		raw_spin_lock_irqsave(&krcp->lock, flags);
>  		if (need_offload_krc(krcp))
>  			schedule_delayed_monitor_work(krcp);
> -		raw_spin_unlock_irqrestore(&krcp->lock, flags);
>  	}
>  }
> 
> -- 
> 2.30.2
> 