On Thu, Mar 5, 2020 at 2:54 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Thu, Mar 05, 2020 at 02:37:59PM +0100, Jinpu Wang wrote:
> > On Thu, Mar 5, 2020 at 2:27 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> > >
> > > On Thu, Mar 05, 2020 at 12:26:01PM +0100, Jinpu Wang wrote:
> > >
> > > > We have to admit, the code snippet is from null_blk's get_tag function,
> > > > not invented by us.
> > > > The get_cpu/put_cpu was added to get/save the current cpu_id, which
> > > > can be removed around the do-while loop;
> > > > we only need raw_smp_processor_id() to get the current cpu, which we use
> > > > later to pick which connection to use.
> > >
> > > Be careful copying crazy core code into drivers..
> > >
> > > > > You have to do something to provably guarantee the send q cannot
> > > > > overflow. send q overflow is defined as calling post_send before a
> > > > > poll_cq has confirmed space is available for send.
> > >
> > > > Shouldn't the cq api handle that already? With IB_POLL_SOFTIRQ,
> > > > poll cq is done on every softirq run, so send queue space should be reclaimed
> > > > fast enough; with IB_POLL_WORKQUEUE, when cq->comp_handler gets called,
> > > > ib_cq_poll_work will do the poll_cq. Together with the extra
> > > > send_queue size reserved,
> > > > the send queue cannot overflow!
> > >
> > > Somehow that doesn't sound like 'provably guarantee' - that is some
> > > statistical argument..
> > Could you give an example which meets the "provably guarantee"?
> > It seems most of the drivers are based on the cq API.
>
> You are supposed to directly keep track of completions and not issue
> sends until completions are seen.
>
> Jason

OK, got it! Thanks
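
For reference, below is a minimal sketch (kernel C) of the accounting Jason
describes: a send is only posted after a completion has handed back a
send-queue slot, so correctness no longer depends on how fast the CQ is
polled. The names (my_conn, sq_avail, my_send_done, my_post_send) are
hypothetical, not the actual driver code. It assumes the CQ was allocated
with ib_alloc_cq() passing the connection as its private context, that each
send WR's wr_cqe->done points at my_send_done, and that all sends are
signaled so every post eventually produces a completion.

#include <linux/atomic.h>
#include <rdma/ib_verbs.h>

/* Hypothetical per-connection state; sq_avail starts at the SQ depth. */
struct my_conn {
	struct ib_qp	*qp;
	atomic_t	sq_avail;	/* free send-queue slots */
};

/* Send completion handler, reached via the WR's wr_cqe->done. */
static void my_send_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct my_conn *conn = cq->cq_context;

	/* A completion was seen, so exactly one slot is free again. */
	atomic_inc(&conn->sq_avail);
}

static int my_post_send(struct my_conn *conn, struct ib_send_wr *wr)
{
	const struct ib_send_wr *bad_wr;
	int ret;

	/* Refuse to post unless a completion has freed a slot. */
	if (atomic_dec_if_positive(&conn->sq_avail) < 0)
		return -EAGAIN;		/* caller backs off and retries */

	ret = ib_post_send(conn->qp, wr, &bad_wr);
	if (ret)
		atomic_inc(&conn->sq_avail);	/* undo reservation on failure */
	return ret;
}

With this kind of counter the post path carries the invariant itself:
whether the CQ is drained from softirq or from a workqueue only affects
latency, never whether the send queue can overflow.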