RE: [PATCH for-next 02/17] IB/hfi1: TID RDMA flow allocation

> -----Original Message-----
> From: Doug Ledford [mailto:dledford@xxxxxxxxxx]
> Sent: Monday, February 04, 2019 1:49 PM
> To: Dalessandro, Dennis <dennis.dalessandro@xxxxxxxxx>; jgg@xxxxxxxx
> Cc: Dixit, Ashutosh <ashutosh.dixit@xxxxxxxxx>; linux-rdma@xxxxxxxxxxxxxxx;
> Mitko Haralanov <mitko.haralanov@xxxxxxxxx>; Marciniszyn, Mike
> <mike.marciniszyn@xxxxxxxxx>; Wan, Kaike <kaike.wan@xxxxxxxxx>
> Subject: Re: [PATCH for-next 02/17] IB/hfi1: TID RDMA flow allocation
> 
> On Wed, 2019-01-23 at 19:29 -0800, Dennis Dalessandro wrote:
> > +/**
> > + * tid_rdma_trigger_resume - field a trigger work request
> > + * @work - the work item
> > + *
> > + * Complete the off qp trigger processing by directly
> > + * calling the progress routine.
> > + */
> > +static void tid_rdma_trigger_resume(struct work_struct *work) {
> > +       struct tid_rdma_qp_params *tr;
> > +       struct hfi1_qp_priv *priv;
> > +       struct rvt_qp *qp;
> > +
> > +       tr = container_of(work, struct tid_rdma_qp_params, trigger_work);
> > +       priv = container_of(tr, struct hfi1_qp_priv, tid_rdma);
> > +       qp = priv->owner;
> > +       spin_lock_irq(&qp->s_lock);
> > +       if (qp->s_flags & HFI1_S_WAIT_TID_SPACE) {
> > +               spin_unlock_irq(&qp->s_lock);
> > +               hfi1_do_send(priv->owner, true);
> > +       } else {
> > +               spin_unlock_irq(&qp->s_lock);
> > +       }
> 
> Why take the lock then drop it before calling hfi1_do_send?  I looked at
> hfi1_do_send (it's doc in the comments needs fixing BTW, you updated it to
> take a *qp instead of a *work without fixing the docs), 
Thank you for pointing it out.

>and I see why you
> have to drop the lock before calling it as it will take the lock itself.  But that
> raises the question of why you are taking the lock here?  Are you trying to
> flush any concurrent accessors out before you test?  If so, do you need the
> exp_lock too?  And even if you are flushing out concurrent access, what's to
> prevent a new concurrent access as soon as you drop the lock?
qp->s_flags is modified all over the hfi1 driver, by both the sending and receiving sides. We therefore generally access it (read or modify) only after acquiring qp->s_lock, so that we never observe stale or inconsistent state.

Kaike
> 
> --
> Doug Ledford <dledford@xxxxxxxxxx>
>     GPG KeyID: B826A3330E572FDD
>     Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD



