RE: [PATCH RFC 0/4] restore polling to nvme-rdma

> > Hey Sagi,
> 
> Hi Steve,
> 
> > Is there no way to handle this in the core?  Maybe have the polling context
> > transition to DIRECT when the queue becomes empty and before re-arming
> the
> > CQ?
> 
> That is what I suggested, but that would mean that we need to drain
> the cq before making the switch, which means we need to allocate a
> dedicated qp for that cq, and even that doesn't guarantee that the
> ULP is not posting other wrs on its own qp(s)...
> 
> So making this safe for inflight I/O would be a challenge... If we end
> up agreeing that we are ok with this functionality, I'd much rather not
> deal with it and simply document "use with care".
> 
> > So ib_change_cq_ctx() would be called to indicate the change should
> > happen when it is safe to do so.
> 
> You lost me here... ib_change_cq_ctx would get called by who and when?

I didn't look in detail at your changes, but ib_change_cq_ctx() is called by the application, right? I was just asking what would happen if the semantics of the call were "change the context when it is safe to do so" rather than "do it immediately and hope there are no outstanding WRs". But I don't think this semantic change simplifies the problem.





