On Mon, Aug 12, 2024 at 05:59:21PM +0200, Larysa Zaremba wrote:
> On Mon, Aug 12, 2024 at 03:03:19PM +0200, Maciej Fijalkowski wrote:
> > On Wed, Jul 24, 2024 at 06:48:36PM +0200, Larysa Zaremba wrote:
> > > Locking used in ice_qp_ena() and ice_qp_dis() does pretty much nothing,
> > > because ICE_CFG_BUSY is a state flag that is supposed to be set in the PF
> > > state, not the VSI one. Therefore it does not protect the queue pair from
> > > e.g. reset.
> > >
> > > Despite being useless, it can still deadlock the unfortunate functions
> > > that have fallen into the same ICE_CFG_BUSY-VSI trap. This happens if
> > > ice_qp_ena() returns an error.
> > >
> > > Remove ICE_CFG_BUSY locking from ice_qp_dis() and ice_qp_ena().
> >
> > Why not just check the pf->state?
>
> I would just cite Jakub: "you lose lockdep and all other infra normal mutex
> would give you." [0]

I was not sure why you were bringing up a mutex here, but I had somehow missed
the 2nd patch :) let me start from the beginning.

>
> [0] https://lore.kernel.org/netdev/20240612140935.54981c49@xxxxxxxxxx/
>
> > And address other broken callsites?
>
> Because the current state of synchronization does not allow me to assume this
> would fix anything, and testing all the places would be out of scope for this
> series.
>
> With Dawid's patch [1], a mutex for XDP, and the miscellaneous changes from
> this series, I think we would probably come pretty close to being able to get
> rid of ICE_CFG_BUSY, at least when locking software resources.
>
> [1] https://lore.kernel.org/netdev/20240812125009.62635-1-dawid.osuchowski@xxxxxxxxxxxxxxx/
>
> > > Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
> > > Reviewed-by: Wojciech Drewek <wojciech.drewek@xxxxxxxxx>
> > > Signed-off-by: Larysa Zaremba <larysa.zaremba@xxxxxxxxx>
> > > ---
> > >  drivers/net/ethernet/intel/ice/ice_xsk.c | 9 ---------
> > >  1 file changed, 9 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > index 5dd50a2866cc..d23fd4ea9129 100644
> > > --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > @@ -163,7 +163,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
> > >  	struct ice_q_vector *q_vector;
> > >  	struct ice_tx_ring *tx_ring;
> > >  	struct ice_rx_ring *rx_ring;
> > > -	int timeout = 50;
> > >  	int err;
> > >
> > >  	if (q_idx >= vsi->num_rxq || q_idx >= vsi->num_txq)
> > > @@ -173,13 +172,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
> > >  	rx_ring = vsi->rx_rings[q_idx];
> > >  	q_vector = rx_ring->q_vector;
> > >
> > > -	while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) {
> > > -		timeout--;
> > > -		if (!timeout)
> > > -			return -EBUSY;
> > > -		usleep_range(1000, 2000);
> > > -	}
> > > -
> > >  	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
> > >  	ice_qvec_toggle_napi(vsi, q_vector, false);
> > >
> > > @@ -250,7 +242,6 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
> > >  	ice_qvec_ena_irq(vsi, q_vector);
> > >
> > >  	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
> > > -	clear_bit(ICE_CFG_BUSY, vsi->state);
> > >
> > >  	return 0;
> > >  }
> > > --
> > > 2.43.0
> > >
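
As an aside for anyone following the thread, the failure mode described in the
commit message boils down to a few lines. Below is a minimal userspace sketch,
not driver code: qp_dis(), qp_ena() and the bit helpers are illustrative
stand-ins for ice_qp_dis(), ice_qp_ena(), test_and_set_bit() and clear_bit().
Once the enable path errors out before its clear_bit(), every later caller
polling the flag exhausts its timeout and fails with -EBUSY:

#include <stdatomic.h>
#include <stdio.h>

#define CFG_BUSY 0x1			/* stand-in for the ICE_CFG_BUSY bit */

static atomic_int vsi_state;		/* stand-in for vsi->state */

/* Userspace stand-ins for the kernel's test_and_set_bit()/clear_bit(). */
static int test_and_set_busy(void)
{
	return atomic_fetch_or(&vsi_state, CFG_BUSY) & CFG_BUSY;
}

static void clear_busy(void)
{
	atomic_fetch_and(&vsi_state, ~CFG_BUSY);
}

/* Mirrors the removed loop in ice_qp_dis(): poll for the flag, give up
 * after 50 tries (the real code sleeps 1-2 ms per iteration).
 */
static int qp_dis(void)
{
	int timeout = 50;

	while (test_and_set_busy()) {
		if (!--timeout)
			return -1;	/* -EBUSY in the driver */
	}
	/* ... disable the queue pair ... */
	return 0;
}

/* Mirrors ice_qp_ena(): any error return before the final clear_bit()
 * leaves the flag set forever.
 */
static int qp_ena(int err)
{
	if (err)
		return err;		/* flag never cleared */
	/* ... enable the queue pair ... */
	clear_busy();
	return 0;
}

int main(void)
{
	qp_dis();			/* takes the flag */
	qp_ena(-1);			/* fails, so the flag stays set */
	printf("next qp_dis(): %d\n", qp_dis());	/* -1: stuck for good */
	return 0;
}

This is also why the mutex suggestion matters: a task blocked forever on a real
mutex is visible to lockdep and the hung-task detector, whereas a stuck state
bit just makes every caller time out silently, which is the point of Jakub's
remark quoted above.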