On 2/28/2020 1:14 AM, Sagi Grimberg wrote:
The patch doesn't say whether this is an actual bug you are seeing or a
theoretical one.
I've noticed this issue while running the fio command below:
fio --rw=randwrite --name=random --norandommap --ioengine=libaio \
    --size=16m --group_reporting --exitall --fsync_on_close=1 \
    --invalidate=1 --direct=1 --filename=/dev/nvme2n1 --iodepth=32 \
    --numjobs=16 --unit_base=1 --bs=4m --kb_base=1000
Note: here the NVMe host is on SIW and the target is on iw_cxgb4; the
max_pages_per_mr values supported by SIW and iw_cxgb4 are 255 and 128,
respectively.
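(For scale, assuming 4KiB pages: the host sizes its transfers against
SIW's 255-page limit, roughly 1MiB per MR, whereas iw_cxgb4 can cover
only 128 pages, i.e. 512KiB, per MR, so the target ends up needing more
rdma_rw contexts per request than it provisioned. That's my reading of
the failure, at least.)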
This needs to be documented in the change log.
The proposed patch enables the host to advertise its max_fr_pages (via
nvme_rdma_cm_req) so that the target can allocate that many RW contexts
(if the host's max_fr_pages is higher than the target's).
As mentioned by Jason, this is a non-compatible change; if you want to
introduce it, you need to go through the standard and update the CM
private_data layout (which would mean the fmt needs to be incremented as
well to remain backward compatible).
Sure, I will initiate a discussion in the NVMe TWG about the CM
private_data format and will update this thread with the response soon.
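Something along these lines is the kind of private_data change I would
bring to the TWG (illustrative only: the max_fr_pages field name, width,
and placement are placeholders; today's struct ends in rsvd[24], and as
noted above the fmt would have to be bumped for backward compatibility):

/*
 * Illustrative sketch only -- not an actual spec change.
 */
struct nvme_rdma_cm_req {
        __le16  recfmt;         /* bumped to a new format value */
        __le16  qid;
        __le16  hrqsize;
        __le16  hsqsize;
        __le32  max_fr_pages;   /* hypothetical: host's max pages per MR */
        u8      rsvd[20];       /* shrunk from 24 to keep the size */
};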
As a stop-gap, nvmet needs to limit the controller mdts to how much
it can allocate based on the HCA capabilities
(max_fast_reg_page_list_len).
Sounds good, please look at capping mdts in the meantime.
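For reference, a rough sketch of the stop-gap as I understand it (the
helper name and the queue->dev->device plumbing are assumptions about
where such a hook would live in nvmet-rdma):

/*
 * Cap mdts by what the HCA can cover with a single MR.  mdts is
 * expressed as log2(max transfer / CAP.MPSMIN); assuming a 4KiB
 * MPSMIN and 4KiB pages, iw_cxgb4's 128-page limit gives
 * ilog2(128) = 7, i.e. a 512KiB cap.
 */
static u8 nvmet_rdma_get_mdts(struct nvmet_rdma_queue *queue)
{
        u32 max_pages = queue->dev->device->attrs.max_fast_reg_page_list_len;

        return ilog2(max_pages);
}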
Guys, see my patches for adding MD support; I'm setting mdts per ctrl
there. We can merge that in the meantime for this issue.