Re: [PATCH stable 4.9.y 4.14.y] random: use expired timer rather than wq for mixing fast pool

On Thu, Oct 13, 2022 at 11:07:31AM -0600, Jason A. Donenfeld wrote:
> commit 748bc4dd9e663f23448d8ad7e58c011a67ea1eca upstream.
> 
> Previously, the fast pool was dumped into the main pool periodically in
> the fast pool's hard IRQ handler. This worked fine and there weren't
> problems with it, until RT came around. Since RT converts spinlocks into
> sleeping locks, problems cropped up. Rather than switching to raw
> spinlocks, the RT developers preferred we make the transformation from
> originally doing:
> 
>     do_some_stuff()
>     spin_lock()
>     do_some_other_stuff()
>     spin_unlock()
> 
> to doing:
> 
>     do_some_stuff()
>     queue_work_on(some_other_stuff_worker)
> 
> This is an ordinary pattern done all over the kernel. However, Sherry
> noticed a 10% performance regression in qperf TCP over a 40gbps
> InfiniBand card. Quoting her message:
> 
> > MT27500 Family [ConnectX-3] cards:
> > Infiniband device 'mlx4_0' port 1 status:
> > default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> > base lid: 0x6
> > sm lid: 0x1
> > state: 4: ACTIVE
> > phys state: 5: LinkUp
> > rate: 40 Gb/sec (4X QDR)
> > link_layer: InfiniBand
> >
> > Cards are configured with IP addresses on private subnet for IPoIB
> > performance testing.
> > Regression identified in this bug is in TCP latency in this stack as reported
> > by qperf tcp_lat metric:
> >
> > We have one system listen as a qperf server:
> > [root@yourQperfServer ~]# qperf
> >
> > Have the other system connect to qperf server as a client (in this
> > case, it’s X7 server with Mellanox card):
> > [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
> 
> Rather than incur the scheduling latency from queue_work_on, we can
> instead switch to running on the next timer tick, on the same core. This
> also batches things a bit more -- once per jiffy -- which is okay now
> that mix_interrupt_randomness() can credit multiple bits at once.
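
For readers following along, the expired-timer trick described above can be sketched roughly as below. This is a hedged illustration, not the exact diff: the field names mirror the upstream `struct fast_pool`, but the helper `schedule_mix_on_next_tick()` is a made-up name for this sketch.

```c
/* Illustrative sketch of the pattern, not the verbatim upstream code. */
struct fast_pool {
	unsigned long pool[4];
	unsigned long last;
	unsigned int count;
	struct timer_list mix;	/* was: struct work_struct mix */
};

static void schedule_mix_on_next_tick(struct fast_pool *fast_pool)
{
	/*
	 * Arm a timer whose expiry is already in the past: it fires on
	 * the very next timer tick, on this same CPU. That avoids the
	 * scheduling latency of queue_work_on() while still moving the
	 * mixing out of the hard IRQ handler, and it naturally batches
	 * the work to at most once per jiffy.
	 */
	if (!timer_pending(&fast_pool->mix)) {
		fast_pool->mix.expires = jiffies;
		add_timer_on(&fast_pool->mix, raw_smp_processor_id());
	}
}
```
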
> 
> Reported-by: Sherry Yang <sherry.yang@xxxxxxxxxx>
> Tested-by: Paul Webb <paul.x.webb@xxxxxxxxxx>
> Cc: Sherry Yang <sherry.yang@xxxxxxxxxx>
> Cc: Phillip Goerl <phillip.goerl@xxxxxxxxxx>
> Cc: Jack Vogel <jack.vogel@xxxxxxxxxx>
> Cc: Nicky Veitch <nicky.veitch@xxxxxxxxxx>
> Cc: Colm Harrington <colm.harrington@xxxxxxxxxx>
> Cc: Ramanan Govindarajan <ramanan.govindarajan@xxxxxxxxxx>
> Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> Cc: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
> Cc: Tejun Heo <tj@xxxxxxxxxx>
> Cc: Sultan Alsawaf <sultan@xxxxxxxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
> Signed-off-by: Jason A. Donenfeld <Jason@xxxxxxxxx>
> ---
>  drivers/char/random.c | 16 ++++++++++------
>  1 file changed, 10 insertions(+), 6 deletions(-)

That worked, thanks, now queued up.

greg k-h


