Re: [PATCH net v3] vmxnet3: Fix tx queue race condition with XDP

On Fri, Jan 31, 2025 at 09:53:41AM +0530, Sankararaman Jayaraman wrote:
> If XDP traffic runs on a CPU whose id is greater than or equal to
> the number of Tx queues on the NIC, then vmxnet3_xdp_get_tq()
> always picks queue 0 for transmission, as it uses reciprocal scale
> instead of a simple modulo operation.
> 
> vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame() use the returned
> queue without any locking, which can lead to race conditions when
> multiple XDP xmits run in parallel on different CPUs.
> 
> This patch uses a simple modulo scheme when the current CPU id equals
> or exceeds the number of Tx queues on the NIC. It also adds locking in
> the vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame() functions.
> 
> Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.")
> Signed-off-by: Sankararaman Jayaraman <sankararaman.jayaraman@xxxxxxxxxxxx>
> Signed-off-by: Ronak Doshi <ronak.doshi@xxxxxxxxxxxx>
> ---
> v3:
>   - In vmxnet3_xdp_xmit_frame(), use the irq version of the spin lock
>   - Fixed the ordering of local variables in vmxnet3_xdp_xmit()
> v2: https://lore.kernel.org/netdev/20250129181703.148027-1-sankararaman.jayaraman@xxxxxxxxxxxx/
>   - Retained the earlier copyright dates as it is a bug fix
>   - Used spin_lock()/spin_unlock() instead of spin_lock_irqsave()
> v1: https://lore.kernel.org/netdev/20250124090211.110328-1-sankararaman.jayaraman@xxxxxxxxxxxx/
> 
>  drivers/net/vmxnet3/vmxnet3_xdp.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)

Reviewed-by: Simon Horman <horms@xxxxxxxxxx>
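
For readers following along, a minimal sketch of the queue-selection and
locking scheme the change log describes. This is an illustration of the
approach, not the applied patch; it assumes the driver's existing
vmxnet3_tx_queue layout, its per-queue tx_lock spinlock, and the
adapter->num_tx_queues field.

    /* Sketch only: fall back to a plain modulo when the current CPU id
     * is not smaller than the number of Tx queues, so high-numbered
     * CPUs still spread across all queues instead of landing on queue 0.
     */
    static struct vmxnet3_tx_queue *
    vmxnet3_xdp_get_tq(struct vmxnet3_adapter *adapter)
    {
            int tq_number = adapter->num_tx_queues;
            int cpu = smp_processor_id();

            if (likely(cpu < tq_number))
                    return &adapter->tx_queue[cpu];

            return &adapter->tx_queue[cpu % tq_number];
    }

    /* Sketch only: inside the XDP frame transmit path, guard the Tx ring
     * with the queue's spinlock; the irq-safe variant matches the v3
     * change note above.
     */
    unsigned long flags;

    spin_lock_irqsave(&tq->tx_lock, flags);
    /* ... reserve a descriptor and post the frame on tq ... */
    spin_unlock_irqrestore(&tq->tx_lock, flags);

Since several CPUs can now map to the same Tx queue, the per-queue lock is
what keeps concurrent XDP transmitters from corrupting the shared ring.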




