On 22/03/23 10:53, Peter Zijlstra wrote:
> On Tue, Mar 07, 2023 at 02:35:58PM +0000, Valentin Schneider wrote:
>
>> @@ -477,6 +490,25 @@ static __always_inline void csd_unlock(struct __call_single_data *csd)
>>  	smp_store_release(&csd->node.u_flags, 0);
>>  }
>>  
>> +static __always_inline void
>> +raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
>> +{
>> +	/*
>> +	 * The list addition should be visible to the target CPU when it pops
>> +	 * the head of the list to pull the entry off it in the IPI handler
>> +	 * because of normal cache coherency rules implied by the underlying
>> +	 * llist ops.
>> +	 *
>> +	 * If IPIs can go out of order to the cache coherency protocol
>> +	 * in an architecture, sufficient synchronisation should be added
>> +	 * to arch code to make it appear to obey cache coherency WRT
>> +	 * locking and barrier primitives. Generic code isn't really
>> +	 * equipped to do the right thing...
>> +	 */
>> +	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
>> +		send_call_function_single_ipi(cpu, func);
>> +}
>> +
>>  static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
>>  
>>  void __smp_call_single_queue(int cpu, struct llist_node *node)
>> @@ -493,21 +525,25 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
>>  		}
>>  	}
>>  #endif
>>  	/*
>> +	 * We have to check the type of the CSD before queueing it, because
>> +	 * once queued it can have its flags cleared by
>> +	 *   flush_smp_call_function_queue()
>> +	 * even if we haven't sent the smp_call IPI yet (e.g. the stopper
>> +	 * executes migration_cpu_stop() on the remote CPU).
>>  	 */
>> +	if (trace_ipi_send_cpumask_enabled()) {
>> +		call_single_data_t *csd;
>> +		smp_call_func_t func;
>> +
>> +		csd = container_of(node, call_single_data_t, node.llist);
>> +		func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
>> +		       sched_ttwu_pending : csd->func;
>> +
>> +		raw_smp_call_single_queue(cpu, node, func);
>> +	} else {
>> +		raw_smp_call_single_queue(cpu, node, NULL);
>> +	}
>>  }
>
> Hurmph... so we only really consume @func when we IPI. Would it not be
> more useful to trace this thing for *every* csd enqueued?

It's true that any CSD enqueued on that CPU's call_single_queue in the
[first CSD llist_add()'ed, IPI IRQ hits] timeframe is a potential source
of interference.

However, can we be sure the first CSD isn't an indirect cause of the
following ones? Say the target CPU exits RCU EQS due to the IPI; there's
a window before it reaches flush_smp_call_function_queue() during which
some other CSD could be enqueued *because* of that change in state.

I couldn't find an easy example of that, and I might be biased as this is
where I'd like to go WRT IPI'ing isolated CPUs in userspace. But
regardless, when correlating an IPI IRQ with its source, we'd always have
to look at the first CSD in that CSD stack.
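
FWIW, here's roughly what I picture tracing every enqueue would look
like. A quick sketch only; trace_csd_queue_cpu() is a made-up event name
for the sake of the example, it doesn't exist in this series:

static __always_inline void
raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
{
	/*
	 * Hypothetical tracepoint: one event per CSD queued, regardless
	 * of whether this particular enqueue ends up raising the IPI.
	 * (@func would then have to be resolved unconditionally in
	 * __smp_call_single_queue(), not just when the IPI event is
	 * enabled.)
	 */
	trace_csd_queue_cpu(cpu, _RET_IP_, func, node);

	/* Only the list-was-empty case actually sends (and traces) the IPI. */
	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
		send_call_function_single_ipi(cpu, func);
}

That would capture every potential interference source, but the point
above still stands: to pin the IPI itself on a cause, you'd match it
against the first CSD queued on that CPU since the queue was last
flushed.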