On Tue, Feb 21, 2023 at 11:46 PM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
>
> On Tue, Feb 21, 2023 at 10:46 PM Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
> >
> > On Tue, 2023-02-21 at 21:39 +0800, Jason Xing wrote:
> > > On Tue, Feb 21, 2023 at 8:27 PM Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, 2023-02-21 at 19:03 +0800, Jason Xing wrote:
> > > > > From: Jason Xing <kernelxing@xxxxxxxxxxx>
> > > > >
> > > > > Quoting from commit 7c80b038d23e ("net: fix sk_wmem_schedule()
> > > > > and sk_rmem_schedule() errors"):
> > > > >
> > > > > "If sk->sk_forward_alloc is 150000, and we need to schedule 150001 bytes,
> > > > > we want to allocate 1 byte more (rounded up to one page),
> > > > > instead of 150001"
> > > >
> > > > I'm wondering whether this could cause a measurable (even if small)
> > > > performance regression, specifically under a high packet rate, with BH
> > > > and user-space processing happening on different CPUs.
> > > >
> > > > Could you please provide the relevant performance figures?
> > >
> > > Sure, I've done some basic tests on my machine as below.
> > >
> > > Environment: 16 CPUs, 60G memory
> > > Server: run "iperf3 -s -p [port]" and start 500 processes.
> > > Client: run "iperf3 -u -c 127.0.0.1 -p [port]" and start 500 processes.
> >
> > Just for the record, with the above command each process will send
> > packets at 1 Mbps - not very relevant performance-wise.
> >
> > Instead you could do:
> >
> > taskset 0x2 iperf -s &
> > iperf -u -c 127.0.0.1 -b 0 -l 64
>
> Thanks for your guidance.
>
> Here are some numbers from the test you suggested, which I ran
> several times.
>
>          IFACE   rxpck/s   txpck/s   rxkB/s   txkB/s
> Before:     lo 411073.41 411073.41 36932.38 36932.38
> After:      lo 410308.73 410308.73 36863.81 36863.81
>
> The above is one run out of many, so it does not mean that the
> original code consistently outperforms the patched one; the output
> is not entirely stable from run to run.

Today I ran the same test on other servers and the results look the
same as above: they fluctuate within ~2%.

One more thing I forgot to mention: the output of iperf itself shows
no difference.

Before: bitrate is 211 - 212 Mbits/sec
After:  bitrate is 211 - 212 Mbits/sec

This result is quite stable, especially when the test keeps running
for more than 2 minutes.

Jason

> Please help me review those numbers.
>
> > > In theory, I don't see why it would cause a regression.
> > > Maybe the patched code reserves less memory than the
> > > original code does?
> >
> > As Eric noted, for UDP traffic, due to the expected average packet
> > size, sk_forward_alloc is touched quite frequently, both with and
> > without this patch, so there is little chance it will have any
> > performance impact.
>
> Well, I see.
>
> Thanks,
> Jason
>
> > Cheers,
> >
> > Paolo
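
For reference, below is a minimal userspace sketch of the accounting
change quoted at the top of the thread. The names sk_forward_alloc and
sk_mem_pages() are borrowed from the kernel helpers, but this is an
illustration assuming 4 KiB pages, not the actual kernel implementation:

/*
 * Model of the sk_wmem_schedule()/sk_rmem_schedule() fix: charge only
 * the shortfall (rounded up to whole pages) instead of the full request.
 */
#include <stdio.h>

#define PAGE_SIZE 4096

/* Round a byte count up to whole pages, like the kernel's sk_mem_pages(). */
static int sk_mem_pages(int amt)
{
	return (amt + PAGE_SIZE - 1) / PAGE_SIZE;
}

int main(void)
{
	int sk_forward_alloc = 150000;	/* bytes already reserved for the socket */
	int size = 150001;		/* bytes we need to schedule now */

	/* Old behaviour: charge the whole request -> 37 pages. */
	printf("old: reserve %d pages\n", sk_mem_pages(size));

	/* Fixed behaviour: charge only the shortfall -> 1 page. */
	int delta = size - sk_forward_alloc;
	if (delta > 0)
		printf("new: reserve %d pages\n", sk_mem_pages(delta));

	return 0;
}

With the commit's example values, the old computation reserves 37 pages
while the fixed one reserves a single page, which matches the quoted
"allocate 1 byte more (rounded up to one page), instead of 150001".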