On 5/30/2018 10:49 AM, Christopher Lameter wrote:
> On Tue, 29 May 2018, Steve Wise wrote:
>
>> @@ -200,17 +204,17 @@ static int nvmet_rdma_alloc_cmd(struct nvmet_rdma_device *ndev,
>>  	c->sge[0].length = sizeof(*c->nvme_cmd);
>>  	c->sge[0].lkey = ndev->pd->local_dma_lkey;
>>
>> -	if (!admin) {
>> +	if (!admin && inline_data_size) {
>>  		c->inline_page = alloc_pages(GFP_KERNEL,
>> -				get_order(NVMET_RDMA_INLINE_DATA_SIZE));
>> +				get_order(inline_data_size));
>
> Now we do higher order allocations here. This means that the allocation
> can fail if system memory is highly fragmented. And the allocations can no
> longer be satisfied from the per cpu caches. So allocation performance
> will drop.

Yes.

>>  		if (!c->inline_page)
>>  			goto out_unmap_cmd;
>
> Maybe think about some sort of fallback to vmalloc or so if this
> alloc fails?

The memory needs to be physically contiguous and will be mapped for DMA,
so vmalloc() won't work.

I could complicate the design by allocating a scatter-gather table for this
memory and then registering it into a single RDMA MR. That would allow
allocating non-contiguous pages. But is that complication worth it here?

Steve.
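
[For illustration only: a rough sketch of what the scatter-gather alternative
described above might look like. The helper name and the c->inline_sgt /
c->inline_nents fields are made up for this example and are not part of the
posted patch; it assumes ndev->device is the target's ib_device. The sg list
would still need to be registered into a single MR (ib_alloc_mr() /
ib_map_mr_sg() plus an IB_WR_REG_MR work request) before one rkey/address
could be advertised for the inline buffer.]

#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

/*
 * Sketch: build the inline data buffer from order-0 pages in a
 * scatterlist instead of a single higher-order allocation.
 */
static int nvmet_rdma_alloc_inline_sg(struct nvmet_rdma_device *ndev,
				      struct nvmet_rdma_cmd *c,
				      size_t inline_data_size)
{
	unsigned int nr_pages = DIV_ROUND_UP(inline_data_size, PAGE_SIZE);
	struct scatterlist *sg;
	int i, mapped;

	/* Hypothetical field: c->inline_sgt would replace c->inline_page. */
	if (sg_alloc_table(&c->inline_sgt, nr_pages, GFP_KERNEL))
		return -ENOMEM;

	/* One order-0 allocation per page; no fragmentation sensitivity. */
	for_each_sg(c->inline_sgt.sgl, sg, nr_pages, i) {
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			goto free_pages;
		sg_set_page(sg, page, PAGE_SIZE, 0);
	}

	/* Inline data is written by the peer, so map for device writes. */
	mapped = ib_dma_map_sg(ndev->device, c->inline_sgt.sgl,
			       nr_pages, DMA_FROM_DEVICE);
	if (!mapped)
		goto free_pages;
	c->inline_nents = mapped;

	/*
	 * A single MR covering the sg list would be set up here, e.g. via
	 * ib_alloc_mr()/ib_map_mr_sg() and a REG_MR work request.
	 */
	return 0;

free_pages:
	for_each_sg(c->inline_sgt.sgl, sg, nr_pages, i)
		if (sg_page(sg))
			__free_page(sg_page(sg));
	sg_free_table(&c->inline_sgt);
	return -ENOMEM;
}

[The tradeoff in such a scheme would be per-command MR registration and
invalidation plus rkey bookkeeping, in exchange for avoiding higher-order
allocations.]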