On Mon, Jan 30, 2023 at 11:38:15PM -0800, John Hickey wrote:
> In commit 'ixgbe: let the xdpdrv work with more than 64 cpus'
> (4fe815850bdc), support was added to allow XDP programs to run on systems
> with more than 64 CPUs by locking the XDP TX rings and indexing them
> using cpu % 64 (IXGBE_MAX_XDP_QS).
>
> Upon trying out this patch via the Intel 5.18.6 out-of-tree driver
> on a system with more than 64 cores, the kernel panicked with an
> array-index-out-of-bounds at the return in ixgbe_determine_xdp_ring in
> ixgbe.h, which means ixgbe_determine_xdp_q_idx was just returning the
> cpu instead of cpu % IXGBE_MAX_XDP_QS.

I'd like to ask you to include the splat you got in the commit message.

> I think this is how it happens:
>
> Upon loading the first XDP program on a system with more than 64 CPUs,
> ixgbe_xdp_locking_key is incremented in ixgbe_xdp_setup. However,
> immediately after this, the rings are reconfigured by ixgbe_setup_tc.
> ixgbe_setup_tc calls ixgbe_clear_interrupt_scheme, which calls
> ixgbe_free_q_vectors, which calls ixgbe_free_q_vector in a loop.
> ixgbe_free_q_vector decrements ixgbe_xdp_locking_key once per call if
> it is non-zero. Commenting out the decrement in ixgbe_free_q_vector
> stopped my system from panicking.
>
> I suspect that to make the original patch work, I would need to load an
> XDP program and then replace it in order to get ixgbe_xdp_locking_key
> back above 0, since ixgbe_setup_tc is only called when transitioning
> between XDP and non-XDP ring configurations, while ixgbe_xdp_locking_key
> is incremented every time ixgbe_xdp_setup is called.
>
> Also, ixgbe_setup_tc can be called via ethtool --set-channels, so this
> becomes another path to decrement ixgbe_xdp_locking_key to 0 on systems
> with greater than 64 CPUs.
>
> For this patch, I have changed static_branch_inc to static_branch_enable
> in ixgbe_xdp_setup. We aren't counting references and I don't see any
> reason to turn it off, since all the locking appears to be in the XDP_TX
> path, which isn't run if an XDP program isn't loaded.
>
> Fixes: 4fe815850bdc ("ixgbe: let the xdpdrv work with more than 64 cpus")
> Signed-off-by: John Hickey <jjh@xxxxxxxxxxxx>
> ---
> v1 -> v2:
> Added Fixes and net tag. No code changes.
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  | 3 ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 2 +-
>  2 files changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> index f8156fe4b1dc..0ee943db3dc9 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> @@ -1035,9 +1035,6 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx)
>  	adapter->q_vector[v_idx] = NULL;
>  	__netif_napi_del(&q_vector->napi);
>
> -	if (static_key_enabled(&ixgbe_xdp_locking_key))
> -		static_branch_dec(&ixgbe_xdp_locking_key);

Yeah, calling this once per q_vector is a *very* unbalanced approach, given
that you bump the key a single time when loading the XDP prog.
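IOW, on a box with more than 64 CPUs, a single prog load currently ends up
doing roughly this (paraphrasing the flow you describe above, not actual
code):

	ixgbe_xdp_setup()
	    static_branch_inc(&ixgbe_xdp_locking_key);		/* count 0 -> 1 */
	    ...
	ixgbe_setup_tc()					/* ring reconfiguration */
	    ixgbe_clear_interrupt_scheme()
	        ixgbe_free_q_vectors()
	            ixgbe_free_q_vector()			/* once per q_vector */
	                if (static_key_enabled(&ixgbe_xdp_locking_key))
	                    static_branch_dec(&ixgbe_xdp_locking_key);	/* 1 -> 0 on the first vector */

so the key is already back to 0 by the time the new rings come up, and
ixgbe_determine_xdp_q_idx stops doing the cpu % IXGBE_MAX_XDP_QS mapping.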
> -
>  	/*
>  	 * after a call to __netif_napi_del() napi may still be used and
>  	 * ixgbe_get_stats64() might access the rings on this vector,
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> index ab8370c413f3..cd2fb72c67be 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> @@ -10283,7 +10283,7 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
>  	if (nr_cpu_ids > IXGBE_MAX_XDP_QS * 2)
>  		return -ENOMEM;
>  	else if (nr_cpu_ids > IXGBE_MAX_XDP_QS)
> -		static_branch_inc(&ixgbe_xdp_locking_key);
> +		static_branch_enable(&ixgbe_xdp_locking_key);

Now that you removed static_branch_dec, you probably need a counterpart
(static_branch_disable) at an appropriate place.

>
>  	old_prog = xchg(&adapter->xdp_prog, prog);
>  	need_reset = (!!prog != !!old_prog);
> --
> 2.37.2
>
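Something along these lines, maybe (completely untested, just to illustrate
where I would expect the disable to live), keyed on whether a prog is being
attached or removed in ixgbe_xdp_setup():

	if (nr_cpu_ids > IXGBE_MAX_XDP_QS * 2) {
		return -ENOMEM;
	} else if (nr_cpu_ids > IXGBE_MAX_XDP_QS) {
		/* only keep the XDP TX ring locking on while a prog is attached */
		if (prog)
			static_branch_enable(&ixgbe_xdp_locking_key);
		else
			static_branch_disable(&ixgbe_xdp_locking_key);
	}

That keeps enable/disable balanced without going back to the per-q_vector
reference counting.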