On Thu, Dec 26, 2019 at 12:29:17PM +0000, Kevin Kou wrote:
> The original patch that brought in the "SCTP ACK tracking trace event"
> feature was committed on Dec 20, 2017. It replaced the jprobe usage
> with trace events and introduced two trace events: TRACE_EVENT(sctp_probe)
> and TRACE_EVENT(sctp_probe_path).
>
> The original patch intended to trigger trace_sctp_probe_path from within
> TRACE_EVENT(sctp_probe), as in the code below:
>
> +TRACE_EVENT(sctp_probe,
> +
> +	TP_PROTO(const struct sctp_endpoint *ep,
> +		 const struct sctp_association *asoc,
> +		 struct sctp_chunk *chunk),
> +
> +	TP_ARGS(ep, asoc, chunk),
> +
> +	TP_STRUCT__entry(
> +		__field(__u64, asoc)
> +		__field(__u32, mark)
> +		__field(__u16, bind_port)
> +		__field(__u16, peer_port)
> +		__field(__u32, pathmtu)
> +		__field(__u32, rwnd)
> +		__field(__u16, unack_data)
> +	),
> +
> +	TP_fast_assign(
> +		struct sk_buff *skb = chunk->skb;
> +
> +		__entry->asoc = (unsigned long)asoc;
> +		__entry->mark = skb->mark;
> +		__entry->bind_port = ep->base.bind_addr.port;
> +		__entry->peer_port = asoc->peer.port;
> +		__entry->pathmtu = asoc->pathmtu;
> +		__entry->rwnd = asoc->peer.rwnd;
> +		__entry->unack_data = asoc->unack_data;
> +
> +		if (trace_sctp_probe_path_enabled()) {
> +			struct sctp_transport *sp;
> +
> +			list_for_each_entry(sp, &asoc->peer.transport_addr_list,
> +					    transports) {
> +				trace_sctp_probe_path(sp, asoc);
> +			}
> +		}
> +	),
>
> But when I tested it, it did not work and trace_sctp_probe_path produced
> no output. I finally found that there is a trace buffer lock operation
> (trace_event_buffer_reserve) in include/trace/trace_events.h:
>
> static notrace void						\
> trace_event_raw_event_##call(void *__data, proto)		\
> {								\
> 	struct trace_event_file *trace_file = __data;		\
> 	struct trace_event_data_offsets_##call __maybe_unused __data_offsets;\
> 	struct trace_event_buffer fbuffer;			\
> 	struct trace_event_raw_##call *entry;			\
> 	int __data_size;					\
> 								\
> 	if (trace_trigger_soft_disabled(trace_file))		\
> 		return;						\
> 								\
> 	__data_size = trace_event_get_offsets_##call(&__data_offsets, args); \
> 								\
> 	entry = trace_event_buffer_reserve(&fbuffer, trace_file,	\
> 				 sizeof(*entry) + __data_size);		\
> 								\
> 	if (!entry)						\
> 		return;						\
> 								\
> 	tstruct							\
> 								\
> 	{ assign; }						\
> 								\
> 	trace_event_buffer_commit(&fbuffer);			\
> }
>
> The reason trace_sctp_probe_path produces no output is that it is written
> in the TP_fast_assign part of TRACE_EVENT(sctp_probe), so when the compiler
> expands the macro it is placed ({ assign; }) after
> trace_event_buffer_reserve():
>
> 	entry = trace_event_buffer_reserve(&fbuffer, trace_file,	\
> 				 sizeof(*entry) + __data_size);		\
> 								\
> 	if (!entry)						\
> 		return;						\
> 								\
> 	tstruct							\
> 								\
> 	{ assign; }						\
>
> As a result, the nested trace_sctp_probe_path cannot acquire the trace
> event buffer and produces no output; in other words, nesting one tracepoint
> entry function inside another is not allowed. The function call flow is:
>
> trace_sctp_probe()
>  -> trace_event_raw_event_sctp_probe()
>   -> lock buffer
>    -> trace_sctp_probe_path()
>     -> trace_event_raw_event_sctp_probe_path() -- nested
>      -> buffer has already been locked, so no output
>
> This patch removes trace_sctp_probe_path from the TP_fast_assign part of
> TRACE_EVENT(sctp_probe) to avoid nesting the entry functions, and triggers
> trace_sctp_probe_path in sctp_outq_sack instead.
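Just to illustrate where the call moves to: below is a minimal sketch of how
the relocated hook can be invoked from sctp_outq_sack(). The guard and the
loop are the same ones quoted above from TP_fast_assign; the variable name
sp is illustrative, and which association/transport pointers are in scope
depends on the actual sctp_outq_sack() code.

	/* Sketch only: emit one sctp_probe_path event per peer transport,
	 * and only when that tracepoint is enabled, so the SACK path is
	 * not affected otherwise.
	 */
	if (trace_sctp_probe_path_enabled()) {
		struct sctp_transport *sp;

		list_for_each_entry(sp, &asoc->peer.transport_addr_list,
				    transports)
			trace_sctp_probe_path(sp, asoc);
	}

Called this way, the per-path event runs as its own tracepoint entry function
rather than inside sctp_probe's TP_fast_assign, so the buffer reservation is
no longer nested.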
>
> After this patch, you can enable both events individually:
>
>   # cd /sys/kernel/debug/tracing
>   # echo 1 > events/sctp/sctp_probe/enable
>   # echo 1 > events/sctp/sctp_probe_path/enable
>
> Or you can enable all the events under sctp:
>
>   # echo 1 > events/sctp/enable
>
> Signed-off-by: Kevin Kou <qdkevin.kou@xxxxxxxxx>

Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@xxxxxxxxx>