The following commit has been merged into the smp/core branch of tip:

Commit-ID:     253a0fb4c62827cdcaf43afcea5d675507eaf7a3
Gitweb:        https://git.kernel.org/tip/253a0fb4c62827cdcaf43afcea5d675507eaf7a3
Author:        Valentin Schneider <vschneid@xxxxxxxxxx>
AuthorDate:    Tue, 07 Mar 2023 14:35:57
Committer:     Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Fri, 24 Mar 2023 11:01:28 +01:00

smp: reword smp call IPI comment

Accessing the call_single_queue hasn't involved a spinlock since 2014:

  6897fc22ea01 ("kernel: use lockless list for smp_call_function_single")

The llist operations (namely cmpxchg() and xchg()) provide similar
ordering guarantees; update the comment to lessen confusion.

Signed-off-by: Valentin Schneider <vschneid@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20230307143558.294354-7-vschneid@xxxxxxxxxx
---
 kernel/smp.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 03e6d57..6bbfabb 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -312,9 +312,10 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
 void __smp_call_single_queue(int cpu, struct llist_node *node)
 {
 	/*
-	 * The list addition should be visible before sending the IPI
-	 * handler locks the list to pull the entry off it because of
-	 * normal cache coherency rules implied by spinlocks.
+	 * The list addition should be visible to the target CPU when it pops
+	 * the head of the list to pull the entry off it in the IPI handler
+	 * because of normal cache coherency rules implied by the underlying
+	 * llist ops.
 	 *
 	 * If IPIs can go out of order to the cache coherency protocol
 	 * in an architecture, sufficient synchronisation should be added
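
[ For context, a minimal standalone sketch of the lockless push the
  reworded comment relies on. This is not the kernel's llist_add()
  (which lives in lib/llist.c and uses the kernel's cmpxchg()
  primitives); llist_add_sketch() and its types here are hypothetical
  stand-ins using C11 atomics. The point it illustrates: the successful
  compare-exchange is a full atomic RMW, so the pushed entry's
  initialisation is visible to the CPU that later pops the list head,
  with no spinlock involved. ]

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct llist_node { struct llist_node *next; };
struct llist_head { _Atomic(struct llist_node *) first; };

/*
 * Push @node onto @head; returns true if the list was previously
 * empty. The seq_cst compare-exchange orders the store of @node and
 * everything before it against a later pop of the list head on
 * another CPU -- the "normal cache coherency rules implied by the
 * underlying llist ops" of the updated comment.
 */
static bool llist_add_sketch(struct llist_node *node,
			     struct llist_head *head)
{
	struct llist_node *first =
		atomic_load_explicit(&head->first, memory_order_relaxed);

	do {
		/* Link the new entry in front of the current head. */
		node->next = first;
		/* On failure, @first is reloaded and the loop retries. */
	} while (!atomic_compare_exchange_weak_explicit(&head->first,
							&first, node,
							memory_order_seq_cst,
							memory_order_relaxed));

	return first == NULL;
}

[ In __smp_call_single_queue() itself, the true-on-previously-empty
  return value is what gates sending the IPI at all: a non-empty list
  means the target CPU already has an IPI pending for the queue. ]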