> > This series adds support to the RDMA core to implicitly allocate the
> > required CQEs when creating a QP. The primary driver for that was to
> > implement a common scheme for CQ pooling, which helps with better
> > resource usage for server / target style drivers that have many
> > outstanding connections. In fact the first version of this code from
> > Sagi did just that: add a CQ pool API, and convert drivers that were
> > using some form of pooling (iSER initiator & target, NVMe target) to
> > that API. But looking at the API I felt that there was still way too
> > much logic in the individual ULPs, and looked into a way to make that
> > boilerplate code go away. It turns out that we can simply create CQs
> > underneath if we know the poll context that the ULP requires, so this
> > series shows an approach that makes CQs mostly invisible to ULPs.
>
> One other note that I wanted to raise for the folks interested in this
> is that with the RDMA core owning the completion queue pools, different
> ULPs can easily share the same completion queue (given that they use
> the same poll context). For example, nvme-rdma host, iser and srp
> initiators can end up using the same completion queues (if running
> simultaneously on the same machine).
>
> Up until now, I couldn't think of anything that can introduce a problem
> with that, but maybe someone else will...

It would be useful to provide details on how many CQs get created, and of
what size, for an uber iSER/NVMF/SRP initiator/host and target.

One concern I have is that cxgb4 CQs require contiguous memory, so a
scheme like CQ pooling might cause resource problems on large core
systems.

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html