On Wed, Mar 15, 2023 at 5:21 PM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
>
> From: Jason Xing <kernelxing@xxxxxxxxxxx>
>
> Sometimes, when debugging latency in this area, we need to know exactly
> which of the backlog queues has grown long enough to cause it. Thus,
> separate the display of the two queues.
>
> Signed-off-by: Jason Xing <kernelxing@xxxxxxxxxxx>
> Reviewed-by: Simon Horman <simon.horman@xxxxxxxxxxxx>

I just noticed that the state of this patch is "Changes Requested" in
patchwork [1], but I didn't see any feedback on it. Please let me know
if someone is available to take another look; further suggestions would
be appreciated.

[1]: https://patchwork.kernel.org/project/netdevbpf/patch/20230315092041.35482-2-kerneljasonxing@xxxxxxxxx/

Thanks,
Jason

> ---
> v4:
> 1) avoid the inconsistency through caching variables suggested by Eric.
> Link: https://lore.kernel.org/lkml/20230314030532.9238-2-kerneljasonxing@xxxxxxxxx/
> 2) remove the unused function: softnet_backlog_len()
>
> v3: drop the comment suggested by Simon
> Link: https://lore.kernel.org/lkml/20230314030532.9238-2-kerneljasonxing@xxxxxxxxx/
>
> v2: keep the total len of backlog queues untouched as Eric said
> Link: https://lore.kernel.org/lkml/20230311151756.83302-1-kerneljasonxing@xxxxxxxxx/
> ---
>  net/core/net-procfs.c | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
> index 1ec23bf8b05c..09f7ed1a04e8 100644
> --- a/net/core/net-procfs.c
> +++ b/net/core/net-procfs.c
> @@ -115,10 +115,14 @@ static int dev_seq_show(struct seq_file *seq, void *v)
>  	return 0;
>  }
>
> -static u32 softnet_backlog_len(struct softnet_data *sd)
> +static u32 softnet_input_pkt_queue_len(struct softnet_data *sd)
>  {
> -	return skb_queue_len_lockless(&sd->input_pkt_queue) +
> -	       skb_queue_len_lockless(&sd->process_queue);
> +	return skb_queue_len_lockless(&sd->input_pkt_queue);
> +}
> +
> +static u32 softnet_process_queue_len(struct softnet_data *sd)
> +{
> +	return skb_queue_len_lockless(&sd->process_queue);
>  }
>
>  static struct softnet_data *softnet_get_online(loff_t *pos)
> @@ -152,6 +156,8 @@ static void softnet_seq_stop(struct seq_file *seq, void *v)
>  static int softnet_seq_show(struct seq_file *seq, void *v)
>  {
>  	struct softnet_data *sd = v;
> +	u32 input_qlen = softnet_input_pkt_queue_len(sd);
> +	u32 process_qlen = softnet_process_queue_len(sd);
>  	unsigned int flow_limit_count = 0;
>
>  #ifdef CONFIG_NET_FLOW_LIMIT
> @@ -169,12 +175,14 @@ static int softnet_seq_show(struct seq_file *seq, void *v)
>  	 * mapping the data a specific CPU
>  	 */
>  	seq_printf(seq,
> -		   "%08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x\n",
> +		   "%08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x "
> +		   "%08x %08x\n",
> 		   sd->processed, sd->dropped, sd->time_squeeze, 0,
> 		   0, 0, 0, 0, /* was fastroute */
> 		   0, /* was cpu_collision */
> 		   sd->received_rps, flow_limit_count,
> -		   softnet_backlog_len(sd), (int)seq->index);
> +		   input_qlen + process_qlen, (int)seq->index,
> +		   input_qlen, process_qlen);
>  	return 0;
>  }
>
> --
> 2.37.3
>
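
For reference, a minimal userspace sketch (not part of the patch; written
by the editor) of how the split columns could be consumed: it reads
/proc/net/softnet_stat and assumes this change is applied, so each line
carries 15 hex fields with the input_pkt_queue and process_queue lengths
in the last two columns.

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/net/softnet_stat", "r");
	char line[512];
	unsigned int v[15];

	if (!f) {
		perror("fopen /proc/net/softnet_stat");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		/* With the patch applied (1-based): field 12 is the combined
		 * backlog, field 13 the CPU index, fields 14 and 15 the split
		 * input_pkt_queue and process_queue lengths.
		 */
		int n = sscanf(line,
			       "%x %x %x %x %x %x %x %x %x %x %x %x %x %x %x",
			       &v[0], &v[1], &v[2], &v[3], &v[4], &v[5],
			       &v[6], &v[7], &v[8], &v[9], &v[10], &v[11],
			       &v[12], &v[13], &v[14]);

		if (n == 15)
			printf("cpu%u: backlog=%u input_pkt_queue=%u process_queue=%u\n",
			       v[12], v[11], v[13], v[14]);
	}

	fclose(f);
	return 0;
}

On an unpatched kernel sscanf() matches only 13 fields, so such lines are
simply skipped rather than misread.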