On Sun, Oct 19, 2014 at 6:50 PM, Sagi Grimberg <sagig@xxxxxxxxxxxxxxxxxx> wrote:
> On 10/16/2014 8:31 AM, Or Gerlitz wrote:
>>
>> On Thu, Oct 16, 2014 at 1:41 AM, Minh Duc Tran <MinhDuc.Tran@xxxxxxxxxx> wrote:
>>>
>>> With the HW and fw profile we are running with the ocrdma currently,
>>> it's 8k per CQ. This number could change if we run on a different hw
>>> or fw profile.
>>
>> OK. So CQEs-per-CQ wise, there's nothing in ocrdma (sw/hw/fw) that is
>> extremely different. The more significant difference is the relatively
>> small number of CQs per device your driver can support.
>>
>> Sorry for being a bit short and not explaining everything; I'm at
>> LPC 2014 so a bit busy... but as far as I can see, here's the list
>> of TODO items:
>>
>> 1. Change the number of CQs to be min(num_cpus, 1/2 of what the
>>    device can support).
>> 2. Add the number of SCSI commands per session, and whether immediate
>>    data is supported for this session, to ep_connect_with_params.
>>
>> Sagi, agree?
>>
>> #1 is pretty easy and we actually have it ready for 3.19
>
> Maybe even 3.18?

It doesn't fix anything, so I don't really see the point.

>> #2 should be easy too. Max, please add it to your TODO for the ep
>> connect changes.
>
> I don't think we need it in ep_connect.
> We can create CQs/QPs with min(desired, device_support), just keep the
> sizes, and adjust the session cmds_max at session creation time, as
> well as the max QPs per CQ.

By "desired", do you mean a hard-coded maximum on the session cmds_max,
as we have today (512)?

> What I am concerned about is that we don't enforce max QPs per CQ. We
> have never seen this overrun, but there is no reason why it couldn't
> happen. I have it on my todo list, but we need to take care of it
> soon. This issue would be somewhat relaxed if we got rid of the
> artificial ISER_MAX_CQ.

I haven't seen this come into play either, but I tend to agree that
once we go for the larger number of CQs, we should be in a more robust
state.
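To make #1 and the QPs-per-CQ accounting concrete, here is a rough,
untested sketch. The iser_comp/iser_device layout and the
ISER_QPS_PER_CQ budget below are made up for illustration; they are not
what is in the driver today:

/*
 * Untested sketch only -- the structure fields and the per-CQ QP
 * budget are hypothetical, and locking around comp selection is
 * elided for brevity.
 */
#include <linux/kernel.h>
#include <linux/cpumask.h>
#include <linux/atomic.h>
#include <rdma/ib_verbs.h>

#define ISER_QPS_PER_CQ		8	/* example budget, not a hard number */

struct iser_comp {
	struct ib_cq	*cq;
	atomic_t	active_qps;	/* QPs currently attached to this CQ */
};

struct iser_device {
	struct ib_device	*ib_device;
	struct ib_device_attr	dev_attr;
	int			comps_used;
	struct iser_comp	*comps;
};

/* TODO #1: min(num_cpus, half of what the device can support) */
static int iser_num_comps(struct iser_device *device)
{
	return min_t(int, num_online_cpus(),
		     device->dev_attr.max_cq / 2);
}

/*
 * Attach a new QP to the least loaded CQ, and fail when every CQ is
 * already at its budget, so a CQ can never be overrun.
 */
static struct iser_comp *iser_assign_comp(struct iser_device *device)
{
	struct iser_comp *least = NULL;
	int i, min_qps = INT_MAX;

	for (i = 0; i < device->comps_used; i++) {
		int qps = atomic_read(&device->comps[i].active_qps);

		if (qps < min_qps) {
			min_qps = qps;
			least = &device->comps[i];
		}
	}

	if (!least || min_qps >= ISER_QPS_PER_CQ)
		return NULL;	/* caller rejects the ep_connect */

	atomic_inc(&least->active_qps);
	return least;
}

The same clamping idea would cover the cmds_max part: at session
creation, clamp the session cmds_max to whatever the QP/CQ sizes we
actually managed to allocate can sustain, instead of passing it through
ep_connect.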