On Thu, Feb 23, 2023 at 4:39 PM Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
>
> On Wed, 2023-02-22 at 11:47 +0800, Jason Xing wrote:
> > On Tue, Feb 21, 2023 at 11:46 PM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
> > >
> > > On Tue, Feb 21, 2023 at 10:46 PM Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, 2023-02-21 at 21:39 +0800, Jason Xing wrote:
> > > > > On Tue, Feb 21, 2023 at 8:27 PM Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Tue, 2023-02-21 at 19:03 +0800, Jason Xing wrote:
> > > > > > > From: Jason Xing <kernelxing@xxxxxxxxxxx>
> > > > > > >
> > > > > > > Quoting from commit 7c80b038d23e ("net: fix sk_wmem_schedule()
> > > > > > > and sk_rmem_schedule() errors"):
> > > > > > >
> > > > > > > "If sk->sk_forward_alloc is 150000, and we need to schedule 150001 bytes,
> > > > > > > we want to allocate 1 byte more (rounded up to one page),
> > > > > > > instead of 150001"
> > > > > >
> > > > > > I'm wondering if this could cause a measurable (even if small)
> > > > > > performance regression, specifically under high packet rates, with
> > > > > > BH and user-space processing happening on different CPUs.
> > > > > >
> > > > > > Could you please provide the relevant performance figures?
> > > > >
> > > > > Sure, I've done some basic tests on my machine as below.
> > > > >
> > > > > Environment: 16 CPUs, 60G memory
> > > > > Server: run "iperf3 -s -p [port]" and start 500 processes.
> > > > > Client: run "iperf3 -u -c 127.0.0.1 -p [port]" and start 500 processes.
> > > >
> > > > Just for the record, with the above command each process will send
> > > > packets at 1 Mbit/s - not very relevant performance-wise.
> > > >
> > > > Instead you could do:
> > > >
> > > > taskset 0x2 iperf -s &
> > > > iperf -u -c 127.0.0.1 -b 0 -l 64
> > > >
> > > Thanks for your guidance.
> > >
> > > Here are some numbers from the test you suggested, which I ran
> > > several times.
> > >
> > > ----------  IFACE    rxpck/s    txpck/s    rxkB/s    txkB/s
> > > Before:     lo     411073.41  411073.41  36932.38  36932.38
> > > After:      lo     410308.73  410308.73  36863.81  36863.81
> > >
> > > The above is one of many results; it does not mean the original
> > > code absolutely outperforms. The output is not all that constant
> > > and stable, I think.
> >
> > Today I ran the same test on other servers, and it looks the same as
> > above. Those results fluctuate within ~2%.
> >
> > Oh, one more thing I forgot to mention: the output of iperf itself
> > doesn't show any difference.
> > Before: Bitrate is 211 - 212 Mbits/sec
> > After: Bitrate is 211 - 212 Mbits/sec
> > So this result is relatively constant, especially if we keep the test
> > running for over 2 minutes.
>
> Thanks for the testing. My personal take on this one is that it is more
> a refactor than a bug fix, as the amount of forward-allocated memory
> should always be negligible for UDP.
>
> Still, it could make sense to keep the accounting scheme consistent
> across different protocols. I suggest reposting for net-next, when it
> re-opens, additionally introducing __sk_mem_schedule() usage to avoid
> code duplication.
>

Thanks for the review. I will replace this part with __sk_mem_schedule()
and then repost it after Mar 6th.

Thanks,
Jason

> Thanks,
>
> Paolo
>
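
For reference, below is a minimal user-space sketch of the delta-based
scheduling discussed in this thread. It is not kernel code: mock_sock,
mem_pages() and mem_schedule() are illustrative stand-ins for struct sock,
sk_mem_pages() and __sk_mem_schedule(), and the failure path under memory
pressure is omitted. It reproduces the 150000/150001 example quoted from
commit 7c80b038d23e: charging only the shortfall costs one extra page,
while charging the full size rounds all 150001 bytes up to whole pages.

/* Minimal user-space model of the delta-based memory scheduling
 * discussed above. mock_sock, mem_pages() and mem_schedule() are
 * illustrative stand-ins, not the kernel's struct sock, sk_mem_pages()
 * or __sk_mem_schedule().
 */
#include <stdio.h>

#define PAGE_SIZE 4096L

struct mock_sock {
	long forward_alloc;	/* models sk->sk_forward_alloc */
};

/* Round a byte count up to whole pages, like sk_mem_pages(). */
static long mem_pages(long bytes)
{
	return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

/* Models __sk_mem_schedule(): charge whole pages to the socket's
 * forward allocation. The kernel version can also fail under memory
 * pressure; that path is omitted here.
 */
static void mem_schedule(struct mock_sock *sk, long bytes)
{
	sk->forward_alloc += mem_pages(bytes) * PAGE_SIZE;
}

int main(void)
{
	long size = 150001;

	/* Delta scheme (7c80b038d23e): schedule only the shortfall. */
	struct mock_sock a = { .forward_alloc = 150000 };
	long delta = size - a.forward_alloc;	/* 1 byte */

	if (delta > 0)
		mem_schedule(&a, delta);	/* 1 byte -> 1 page (4096) */
	a.forward_alloc -= size;
	printf("delta scheme:     forward_alloc = %ld\n", a.forward_alloc);

	/* Full-size scheme (pre-fix): schedule the whole 150001 bytes,
	 * i.e. 37 pages = 151552 bytes, far more than the 1 missing byte
	 * justified.
	 */
	struct mock_sock b = { .forward_alloc = 150000 };

	mem_schedule(&b, size);
	b.forward_alloc -= size;
	printf("full-size scheme: forward_alloc = %ld\n", b.forward_alloc);
	return 0;
}

It prints forward_alloc = 4095 for the delta scheme versus 151551 for the
full-size scheme. Calling __sk_mem_schedule() with the shortfall, as
suggested above, gives UDP this delta behavior without open-coding the
page rounding.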