On Wed, Mar 22, 2023 at 12:20:28PM +0000, Valentin Schneider wrote:
> On 22/03/23 10:53, Peter Zijlstra wrote:
> > Hurmph... so we only really consume @func when we IPI. Would it not be
> > more useful to trace this thing for *every* csd enqueued?
>
> It's true that any CSD enqueued on that CPU's call_single_queue in the
> [first CSD llist_add()'ed, IPI IRQ hits] timeframe is a potential source of
> interference.
>
> However, can we be sure that first CSD isn't an indirect cause for the
> following ones? Say the target CPU exits RCU EQS due to the IPI; there's a
> bit of time before it gets to flush_smp_call_function_queue() where some other CSD
> could be enqueued *because* of that change in state.
>
> I couldn't find an easy example of that; I might be biased as this is where
> I'd like to go wrt IPI'ing isolated CPUs in usermode. But regardless, when
> correlating an IPI IRQ with its source, we'd always have to look at the
> first CSD in that CSD stack.

So I was thinking something like this:

---
Subject: trace,smp: Trace all smp_function_call*() invocations
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Date: Wed Mar 22 14:58:36 CET 2023

(Ab)use the trace_ipi_send_cpu*() family to trace all
smp_function_call*() invocations, not only those that result in an
actual IPI. The queued entries log their callback function, while the
actual IPIs are traced on generic_smp_call_function_single_interrupt().

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
 kernel/smp.c |   58 ++++++++++++++++++++++++++++++----------------------------
 1 file changed, 30 insertions(+), 28 deletions(-)

--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -106,18 +106,20 @@ void __init call_function_init(void)
 }
 
 static __always_inline void
-send_call_function_single_ipi(int cpu, smp_call_func_t func)
+send_call_function_single_ipi(int cpu)
 {
 	if (call_function_single_prep_ipi(cpu)) {
-		trace_ipi_send_cpu(cpu, _RET_IP_, func);
+		trace_ipi_send_cpu(cpu, _RET_IP_,
+				   generic_smp_call_function_single_interrupt);
 		arch_send_call_function_single_ipi(cpu);
 	}
 }
 
 static __always_inline void
-send_call_function_ipi_mask(const struct cpumask *mask, smp_call_func_t func)
+send_call_function_ipi_mask(const struct cpumask *mask)
 {
-	trace_ipi_send_cpumask(mask, _RET_IP_, func);
+	trace_ipi_send_cpumask(mask, _RET_IP_,
+			       generic_smp_call_function_single_interrupt);
 	arch_send_call_function_ipi_mask(mask);
 }
 
@@ -318,25 +320,6 @@ static __always_inline void csd_unlock(s
 	smp_store_release(&csd->node.u_flags, 0);
 }
 
-static __always_inline void
-raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
-{
-	/*
-	 * The list addition should be visible to the target CPU when it pops
-	 * the head of the list to pull the entry off it in the IPI handler
-	 * because of normal cache coherency rules implied by the underlying
-	 * llist ops.
-	 *
-	 * If IPIs can go out of order to the cache coherency protocol
-	 * in an architecture, sufficient synchronisation should be added
-	 * to arch code to make it appear to obey cache coherency WRT
-	 * locking and barrier primitives. Generic code isn't really
-	 * equipped to do the right thing...
-	 */
-	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
-		send_call_function_single_ipi(cpu, func);
-}
-
 static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
 
 void __smp_call_single_queue(int cpu, struct llist_node *node)
@@ -356,10 +339,23 @@ void __smp_call_single_queue(int cpu, st
 		func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
 			sched_ttwu_pending : csd->func;
 
-		raw_smp_call_single_queue(cpu, node, func);
-	} else {
-		raw_smp_call_single_queue(cpu, node, NULL);
+		trace_ipi_send_cpu(cpu, _RET_IP_, func);
 	}
+
+	/*
+	 * The list addition should be visible to the target CPU when it pops
+	 * the head of the list to pull the entry off it in the IPI handler
+	 * because of normal cache coherency rules implied by the underlying
+	 * llist ops.
+	 *
+	 * If IPIs can go out of order to the cache coherency protocol
+	 * in an architecture, sufficient synchronisation should be added
+	 * to arch code to make it appear to obey cache coherency WRT
+	 * locking and barrier primitives. Generic code isn't really
+	 * equipped to do the right thing...
+	 */
+	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
+		send_call_function_single_ipi(cpu);
 }
 
 /*
@@ -798,14 +794,20 @@ static void smp_call_function_many_cond(
 		}
 
 		/*
+		 * Trace each smp_function_call_*() as an IPI, actual IPIs
+		 * will be traced with func==generic_smp_call_function_single_ipi().
+		 */
+		trace_ipi_send_cpumask(cfd->cpumask_ipi, _RET_IP_, func);
+
+		/*
 		 * Choose the most efficient way to send an IPI. Note that the
 		 * number of CPUs might be zero due to concurrent changes to the
 		 * provided mask.
 		 */
 		if (nr_cpus == 1)
-			send_call_function_single_ipi(last_cpu, func);
+			send_call_function_single_ipi(last_cpu);
 		else if (likely(nr_cpus > 1))
-			send_call_function_ipi_mask(cfd->cpumask_ipi, func);
+			send_call_function_ipi_mask(cfd->cpumask_ipi);
 	}
 
 	if (run_local && (!cond_func || cond_func(this_cpu, info))) {
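
Not part of the patch, but for completeness: a rough userspace sketch for
eyeballing what the above ends up logging. It assumes tracefs is mounted at
/sys/kernel/tracing and that the ipi_send_cpu / ipi_send_cpumask events are
exposed under the "ipi" event group; adjust the paths if your setup differs.

/* sketch only: enable the IPI trace events and stream trace_pipe */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
	ssize_t len = strlen(val);
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return -1;
	}
	if (write(fd, val, len) != len) {
		perror(path);
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd;

	/* Enable both tracepoints used by the patch above. */
	if (write_str("/sys/kernel/tracing/events/ipi/ipi_send_cpu/enable", "1") ||
	    write_str("/sys/kernel/tracing/events/ipi/ipi_send_cpumask/enable", "1"))
		return 1;

	/* Stream trace records until interrupted. */
	fd = open("/sys/kernel/tracing/trace_pipe", O_RDONLY);
	if (fd < 0) {
		perror("trace_pipe");
		return 1;
	}

	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		if (write(STDOUT_FILENO, buf, n) < 0)
			break;
	}

	return 0;
}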