The current nvmet-rdma code sizes its MR pool based on the host's SQ size, assuming host and target use the same "max_pages_per_mr" count. But if the host's max_pages_per_mr is greater than the target's, the target can run out of MRs while processing large IO WRITEs. For example, if the host's SQ size is 100, the MR pool currently allocated at the target will also be 100 MRs. But 100 IO WRITE requests with an sg_count of 256 (IO size above 1MB) require 200 MRs when the target's "max_pages_per_mr" is 128, since each such request spans two MRs.
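To make that arithmetic concrete, here is a minimal sketch (the helper name and its DIV_ROUND_UP form are mine for illustration, not taken from the patch) of how many MRs one IO consumes on the target:

#include <linux/kernel.h>

/* MRs needed to map one IO's scatterlist on the target side;
 * illustrative helper, not from the patch under review. */
static unsigned int mrs_per_io(unsigned int sg_count,
			       unsigned int tgt_max_pages_per_mr)
{
	return DIV_ROUND_UP(sg_count, tgt_max_pages_per_mr);
}

With sg_count = 256 and tgt_max_pages_per_mr = 128, each WRITE needs 2 MRs, so 100 in-flight WRITEs need 200 MRs against a pool sized for 100.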
The patch doesn't say whether this is an actual bug you are seeing or a theoretical one.
The proposed patch lets the host advertise its max_fr_pages (via nvme_rdma_cm_req) so that the target can allocate enough RW ctxs when the host's max_fr_pages is higher than the target's.
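For reference, a hypothetical sketch of what carving such a field out of the reserved bytes of nvme_rdma_cm_req might look like (the patch's actual layout isn't shown here, and, per the comment below, this can't be done without going through the standard and bumping recfmt):

/* include/linux/nvme-rdma.h currently reserves 24 bytes after hsqsize.
 * The max_fr_pages field below is an assumed addition for illustration. */
struct nvme_rdma_cm_req {
	__le16		recfmt;		/* would need to be incremented */
	__le16		qid;
	__le16		hrqsize;
	__le16		hsqsize;
	__le32		max_fr_pages;	/* assumed new field, not in the spec */
	u8		rsvd[20];
};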
As mentioned by Jason, this is a non-compatible change; if you want to introduce it you need to go through the standard and update the cm private_data layout (which would mean that the fmt needs to be incremented as well to stay backward compatible). As a stop-gap, nvmet needs to limit the controller MDTS to how much it can allocate based on the HCA capabilities (max_fast_reg_page_list_len).
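A minimal sketch of that stop-gap (the hook name and its placement are assumptions; the in-tree hook may look different):

#include <linux/log2.h>
#include <rdma/ib_verbs.h>

/* Cap MDTS to what a single fast-reg MR on this HCA can cover.
 * MDTS is a power of two in units of CAP.MPSMIN pages; this assumes
 * PAGE_SIZE matches the MPSMIN page size (4K). */
static u8 nvmet_rdma_mdts(struct ib_device *dev)
{
	return ilog2(dev->attrs.max_fast_reg_page_list_len);
}

e.g. max_fast_reg_page_list_len = 256 gives an MDTS of 8 (2^8 * 4K = 1MB), so no IO the host issues can need more than one MR per RW ctx on the target.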