Re: reduce iSERT Max IO size

On 10/7/2020 6:36 AM, Krishnamraju Eraparaju wrote:
On Sunday, October 04, 2020 at 00:45:26 +0300, Max Gurtovoy wrote:
On 10/3/2020 6:36 AM, Krishnamraju Eraparaju wrote:
On Friday, October 02, 2020 at 13:29:30 -0700, Sagi Grimberg wrote:
Hi Sagi & Max,

Any update on this?
Please change the max IO size to 1MiB (256 pages).
I think the reason this was changed was to handle the worst case, since
the initiator and the target may have different capabilities with
respect to the number of pages per MR. There is no handshake that aligns
expectations.
But the max pages per MR supported by most adapters is only around 256.
And I think only those iSER initiators whose max pages per MR is 4096
(4096 pages x 4KiB = 16MiB) could send 16MiB-sized IOs, am I correct?
If the initiator can send 16MiB, we must make sure the target is
capable of receiving it.
I think the max IO size at the iSER initiator depends on
"max_fast_reg_page_list_len".
Currently, these are the "max_fast_reg_page_list_len" values supported
by the various iWARP drivers:

iw_cxgb4: 128 pages
SoftiWARP (siw): 256 pages
i40iw: 512 pages
qedr: couldn't find.

For the iWARP case, if 512 pages is the most any iWARP driver supports,
then wouldn't provisioning a gigantic MR pool at the target (to
accommodate never-used 16MiB IOs) be overkill?
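
For reference, a minimal sketch (not actual iSER code; the helper name
is made up) of how a ULP could derive its largest single-MR IO size
from that device attribute:

#include <rdma/ib_verbs.h>

/* Largest IO that one fast-registration MR can cover on this device. */
static unsigned int ulp_max_io_bytes(struct ib_device *ib_dev)
{
	/* e.g. iw_cxgb4: 128 pages -> 512KiB, siw: 256 pages -> 1MiB */
	return ib_dev->attrs.max_fast_reg_page_list_len * PAGE_SIZE;
}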

For RoCE/IB Mellanox HCAs we support a 16MiB IO size and even more; we
limited it to 16MiB in iSER/iSERT.

Sagi,

what about adding a module parameter for this, as we did in the iSER
initiator?
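
Something along these lines, purely as a sketch (isert has no such
parameter today; the name and default here are illustrative, modeled on
the initiator's existing "max_sectors" parameter):

#include <linux/module.h>

/* Hypothetical knob: cap the target's max IO size, in pages per IO. */
static unsigned int isert_sg_tablesize = 256;	/* 256 pages = 1MiB */
module_param_named(sg_tablesize, isert_sg_tablesize, uint, 0444);
MODULE_PARM_DESC(sg_tablesize,
		 "Max number of pages per IO (default: 256, i.e. 1MiB)");

Users who need the 16MiB behavior could then opt in at load time instead
of every target paying for the worst case.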

If we revert that, it would restore the issue that you reported in the
first place:

--
IB/isert: allocate RW ctxs according to max IO size
I don't see the reported issue after reducing the IO size to 256 pages
(keeping all other changes of this patch intact).
That is, "attr.cap.max_rdma_ctxs" is now being filled properly by the
"rdma_rw_mr_factor()" related changes, I think.

Before this change, "attr.cap.max_rdma_ctxs" was hardcoded to 128
(ISCSI_DEF_XMIT_CMDS_MAX), which is very low for the single-target,
multi-LUN case.

So reverting only the ISCSI_ISER_MAX_SG_TABLESIZE macro to 256 doesn't
reintroduce the reported issue.

Thanks,
Krishnam Raju.
Current iSER target code allocates the MR pool budget based on queue
size. Since there is no handshake between the iSER initiator and target
on max IO size, we'll set the iSER target to support up to 16MiB IO
operations and allocate the correct number of RDMA ctxs according to the
factor of MRs per IO operation. This guarantees a sufficiently large MR
pool for the required IO queue depth and IO size.

Reported-by: Krishnamraju Eraparaju <krishna2@xxxxxxxxxxx>
Tested-by: Krishnamraju Eraparaju <krishna2@xxxxxxxxxxx>
Signed-off-by: Max Gurtovoy <maxg@xxxxxxxxxxxx>
--
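
For context, the sizing that patch introduces looks roughly like this
(simplified from my reading of the commit; the exact code may differ):

	/* Size the RW ctx pool for the worst-case IO size instead of
	 * assuming one ctx per command: */
	u32 factor = rdma_rw_mr_factor(device->ib_device, cma_id->port_num,
				       ISCSI_ISER_MAX_SG_TABLESIZE);
	attr.cap.max_rdma_ctxs = ISCSI_DEF_XMIT_CMDS_MAX * factor;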

Thanks,
Krishnam Raju.
On Wednesday, September 23, 2020 at 01:57:47 -0700, Sagi Grimberg wrote:
Hi,

Please reduce the max IO size to 1MiB (256 pages) at the iSER target.
PBL memory consumption has increased significantly after the max IO size
was raised to 16MiB (with commit 317000b926b07c).
Due to the large MR pool, the max number of iSER connections (on one
variant of Chelsio cards) came down to 9; before, it was 250.
The NVMe-RDMA target also uses a 1MiB max IO size.
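
To put rough numbers on this (back-of-envelope only; I'm assuming 4KiB
pages and 8-byte PBL entries, and actual per-connection totals also
depend on the MR factor):

#include <stdio.h>

int main(void)
{
	unsigned long page = 4096, pbl_entry = 8, qd = 128;
	unsigned long pbl_16m = (16UL << 20) / page * pbl_entry; /* 32KiB/MR */
	unsigned long pbl_1m = (1UL << 20) / page * pbl_entry;   /*  2KiB/MR */

	printf("PBL per MR: 16MiB IO = %luKiB, 1MiB IO = %luKiB\n",
	       pbl_16m >> 10, pbl_1m >> 10);
	printf("Per connection at qd=%lu: %luMiB vs %luKiB\n",
	       qd, (qd * pbl_16m) >> 20, (qd * pbl_1m) >> 10);
	return 0;
}

That is a 16x difference per MR, which multiplies across the whole pool
and is consistent with the connection count dropping the way it did.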
Max, remind me, what was the point of supporting 16M? Did this resolve
an issue?


