+int rdma_rw_init_mrs(struct ib_qp *qp, struct ib_qp_init_attr *attr)
+{
+	struct ib_device *dev = qp->pd->device;
+	int ret = 0;
+
+	if (rdma_rw_use_mr(dev, attr->port_num)) {
+		ret = ib_mr_pool_init(qp, &qp->rdma_mrs,
+				attr->cap.max_rdma_ctxs, IB_MR_TYPE_MEM_REG,
+				dev->attrs.max_fast_reg_page_list_len);
Christoph,

This is a problem for mlx5, which exposes:

	props->max_fast_reg_page_list_len = (unsigned int)-1;

That value is obviously wrong and needs to be corrected, but even so,
unconditionally allocating the maximum supported page list length is
overkill. How about choosing a sane default of 256/512 pages for now?
I don't think we'll see a lot of larger transfers in iser/nvmf (which
actually need MRs for iWARP).

Alternatively, we can allow the caller to limit the MR size?
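Something along these lines is what I have in mind; a rough sketch only,
not a tested patch, and RDMA_RW_MAX_PAGES is just a made-up name for the
proposed default:

	/* Hypothetical default cap; not part of the posted patch. */
	#define RDMA_RW_MAX_PAGES	256

	if (rdma_rw_use_mr(dev, attr->port_num)) {
		/*
		 * Clamp the per-MR page list length to a sane default
		 * instead of trusting max_fast_reg_page_list_len, which
		 * mlx5 currently reports as (unsigned int)-1.
		 */
		u32 pages = min_t(u32, RDMA_RW_MAX_PAGES,
				  dev->attrs.max_fast_reg_page_list_len);

		ret = ib_mr_pool_init(qp, &qp->rdma_mrs,
				attr->cap.max_rdma_ctxs,
				IB_MR_TYPE_MEM_REG, pages);
	}

That keeps the pool allocation cheap on mlx5 until the driver's reported
limit is fixed, and a caller-supplied limit could later reuse the same
clamp.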