Re: [PATCH for-next 4/4] RDMA/efa: CQ notifications

On Sun, Sep 05, 2021 at 02:05:15PM +0300, Gal Pressman wrote:
> On 05/09/2021 13:54, Leon Romanovsky wrote:
> > On Sun, Sep 05, 2021 at 01:45:41PM +0300, Gal Pressman wrote:
> >> On 05/09/2021 10:59, Leon Romanovsky wrote:
> >>> On Sun, Sep 05, 2021 at 10:25:17AM +0300, Gal Pressman wrote:
> >>>> On 02/09/2021 18:41, Jason Gunthorpe wrote:
> >>>>> On Thu, Sep 02, 2021 at 06:17:45PM +0300, Gal Pressman wrote:
> >>>>>> On 02/09/2021 18:10, Jason Gunthorpe wrote:
> >>>>>>> On Thu, Sep 02, 2021 at 06:09:39PM +0300, Gal Pressman wrote:
> >>>>>>>> On 02/09/2021 16:02, Jason Gunthorpe wrote:
> >>>>>>>>> On Thu, Sep 02, 2021 at 10:03:16AM +0300, Gal Pressman wrote:
> >>>>>>>>>> On 01/09/2021 18:36, Jason Gunthorpe wrote:
> >>>>>>>>>>> On Wed, Sep 01, 2021 at 05:24:43PM +0300, Gal Pressman wrote:
> >>>>>>>>>>>> On 01/09/2021 14:57, Jason Gunthorpe wrote:
> >>>>>>>>>>>>> On Wed, Sep 01, 2021 at 02:50:42PM +0300, Gal Pressman wrote:
> >>>>>>>>>>>>>> On 20/08/2021 21:27, Jason Gunthorpe wrote:
> >>>>>>>>>>>>>>> On Wed, Aug 11, 2021 at 06:11:31PM +0300, Gal Pressman wrote:
> >>>>>>>>>>>>>>>> diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
> >>>>>>>>>>>>>>>> index 417dea5f90cf..29db4dec02f0 100644
> >>>>>>>>>>>>>>>> +++ b/drivers/infiniband/hw/efa/efa_main.c
> >>>>>>>>>>>>>>>> @@ -67,6 +67,46 @@ static void efa_release_bars(struct efa_dev *dev, int bars_mask)
> >>>>>>>>>>>>>>>>      pci_release_selected_regions(pdev, release_bars);
> >>>>>>>>>>>>>>>>  }
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> +static void efa_process_comp_eqe(struct efa_dev *dev, struct efa_admin_eqe *eqe)
> >>>>>>>>>>>>>>>> +{
> >>>>>>>>>>>>>>>> +    u16 cqn = eqe->u.comp_event.cqn;
> >>>>>>>>>>>>>>>> +    struct efa_cq *cq;
> >>>>>>>>>>>>>>>> +
> >>>>>>>>>>>>>>>> +    cq = xa_load(&dev->cqs_xa, cqn);
> >>>>>>>>>>>>>>>> +    if (unlikely(!cq)) {
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> This seems unlikely to be correct, what prevents cq from being
> >>>>>>>>>>>>>>> destroyed concurrently?
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> A comp_handler cannot be running after cq destroy completes.
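Schematically, the race being pointed out here has the following shape: the EQ
interrupt handler can look the CQ up just before destroy erases it, and then
dereference memory that has already been freed. The interleaving below is
illustrative, not lifted from the driver:

/*
 * CPU0 (EQ interrupt)                  CPU1 (efa_destroy_cq)
 *
 * cq = xa_load(&dev->cqs_xa, cqn);
 *                                      xa_erase(&dev->cqs_xa, cqn);
 *                                      cq struct freed (by ib_core)
 * cq->ibcq.comp_handler(...);          <-- use-after-free
 */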
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Sorry for the long turnaround, was OOO.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The CQ cannot be destroyed until all completion events are acked.
> >>>>>>>>>>>>>> https://github.com/linux-rdma/rdma-core/blob/7fd01f0c6799f0ecb99cae03c22cf7ff61ffbf5a/libibverbs/man/ibv_get_cq_event.3#L45
> >>>>>>>>>>>>>> https://github.com/linux-rdma/rdma-core/blob/7fd01f0c6799f0ecb99cae03c22cf7ff61ffbf5a/libibverbs/cmd_cq.c#L208
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> That is something quite different, and in userspace.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> What in the kernel prevents the xa_load and the xa_erase from racing with each other?
> >>>>>>>>>>>>
> >>>>>>>>>>>> Good point.
> >>>>>>>>>>>> I think we need to surround efa_process_comp_eqe() with an rcu_read_lock() and
> >>>>>>>>>>>> have a synchronize_rcu() after removing the CQ from the xarray in
> >>>>>>>>>>>> destroy_cq.
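As a minimal sketch of the pattern being proposed here (the read side follows
the patch excerpt above; the destroy side is illustrative, with to_ecq(),
to_edev() and the cq_idx field assumed from the driver's conventions rather
than taken from the final code):

static void efa_process_comp_eqe(struct efa_dev *dev,
                                 struct efa_admin_eqe *eqe)
{
        u16 cqn = eqe->u.comp_event.cqn;
        struct efa_cq *cq;

        /* Readers resolve cqn -> cq under the RCU read lock. */
        rcu_read_lock();
        cq = xa_load(&dev->cqs_xa, cqn);
        if (likely(cq))
                cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
        rcu_read_unlock();
}

int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
{
        struct efa_cq *cq = to_ecq(ibcq);
        struct efa_dev *dev = to_edev(ibcq->device);

        /* Unpublish the CQ, then wait out any in-flight readers. */
        xa_erase(&dev->cqs_xa, cq->cq_idx);
        synchronize_rcu();
        /* ... destroy the CQ in the device and free resources ... */
        return 0;
}

Jason's objection below is to the synchronize_rcu() itself: a
userspace-reachable destroy verb that blocks on a grace period is easy to
abuse.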
> >>>>>>>>>>>
> >>>>>>>>>>> Try to avoid synchronize_rcu()
> >>>>>>>>>>
> >>>>>>>>>> I don't see how that's possible?
> >>>>>>>>>
> >>>>>>>>> Usually people use call_rcu() instead
> >>>>>>>>
> >>>>>>>> Oh nice, thanks.
> >>>>>>>>
> >>>>>>>> I think the code would be much simpler using synchronize_rcu(), and the
> >>>>>>>> destroy_cq flow is usually on the cold path anyway. I also prefer to be
> >>>>>>>> certain that the CQ is freed once the destroy verb returns, rather than
> >>>>>>>> relying on the callback scheduling.
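The call_rcu() variant defers the final free to an RCU callback instead of
blocking. Since the efa_cq memory itself is owned and freed by ib_core, a
sketch of this approach would defer a small driver-owned lookup entry instead;
the names and layout below are hypothetical, not the driver's actual code:

/* Driver-owned entry stored in cqs_xa; struct efa_cq is freed by ib_core. */
struct efa_cq_entry {
        struct ib_cq *ibcq;
        struct rcu_head rcu;
};

static void efa_cq_entry_free_rcu(struct rcu_head *head)
{
        /* Runs once all pre-existing RCU readers have finished. */
        kfree(container_of(head, struct efa_cq_entry, rcu));
}

int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
{
        struct efa_dev *dev = to_edev(ibcq->device);
        struct efa_cq_entry *entry;

        entry = xa_erase(&dev->cqs_xa, to_ecq(ibcq)->cq_idx);
        if (entry)
                call_rcu(&entry->rcu, efa_cq_entry_free_rcu); /* returns at once */
        return 0;
}

Note that this keeps the lookup entry alive, but as the discussion below makes
explicit, it does not stop a reader that already loaded the entry from
invoking the comp_handler after the verb has returned.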
> >>>>>>>
> >>>>>>> I would not be happy to see synchronize_rcu on uverbs destroy
> >>>>>>> functions, it is too easy to DOS the kernel with that.
> >>>>>>
> >>>>>> OK, but isn't the fact that the uverb can return before the CQ is actually
> >>>>>> destroyed problematic?
> >>>>>
> >>>>> Yes, you can't allow that, something other than RCU needs to prevent
> >>>>> that
> >>>>>
> >>>>>> Maybe it's an extreme corner case, but if I created max_cq CQs, destroyed
> >>>>>> one, and then tried to create another one, it is not guaranteed that the
> >>>>>> create operation would succeed - even though the destroy has finished.
> >>>>>
> >>>>> More importantly a driver cannot call completion callbacks once
> >>>>> destroy cq has returned.
> >>>>
> >>>> So how is having some kind of synchronization to wait for the call_rcu()
> >>>> callback to finish different than using synchronize_rcu()? We'll have to wait
> >>>> for the readers to finish before returning.
> >>>
> >>> Why do you need to do anything special beyond nullifying the
> >>> completion callback, which ensures that no new readers are coming, and
> >>> using call_rcu() to make sure that existing readers have finished?
> >>
> >> I ensure there are no new readers by removing the CQ from the xarray.
> >> Then I must wait for all existing readers before returning from efa_destroy_cq
> >> and freeing the cq struct (which is done by ib_core).
> > 
> > IB/core calls rdma_restrack_del(), which does a wait_for_completion()
> > before freeing the CQ and returning to the user. You don't need to wait
> > in efa_destroy_cq().
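For context, the restrack guard Leon describes follows the common
refcount-plus-completion shape, roughly as sketched below with illustrative
names (not the actual restrack code). Gal's point in the reply is that it only
orders against readers that actually take the reference:

struct tracked_res {
        refcount_t kref;
        struct completion comp;
};

static void res_init(struct tracked_res *res)
{
        refcount_set(&res->kref, 1);    /* initial reference held by the owner */
        init_completion(&res->comp);
}

static bool res_get(struct tracked_res *res)
{
        /* Readers that want the lifetime guarantee must take a reference. */
        return refcount_inc_not_zero(&res->kref);
}

static void res_put(struct tracked_res *res)
{
        if (refcount_dec_and_test(&res->kref))
                complete(&res->comp);
}

static void res_del(struct tracked_res *res)
{
        res_put(res);                           /* drop the initial reference */
        wait_for_completion(&res->comp);        /* waits only for res_get() users */
}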
> 
> The irq flow doesn't call rdma_restrack_get(), so I'm not sure how the
> wait_for_completion() makes a difference here.
> And if it does, then the code is fine as is? There's nothing the call_rcu()
> needs to do.

I can't say whether it is needed or not, I just wanted to understand why you
need the extra complexity in the destroy_cq path.

Thanks


