On 3/19/2022 9:29 AM, Yi Zhang wrote:
On Wed, Mar 16, 2022 at 11:16 PM Sagi Grimberg <sagi@xxxxxxxxxxx> wrote:
Hi Yi Zhang,
thanks for testing the patches.
Can you provide more info on the time it took with both kernels?
Hi Max
Sorry for the late response, here are the test results/dmesg on
debug/non-debug kernel with your patch:
debug kernel: timeout
# time nvme connect -t rdma -a 172.31.0.202 -s 4420 -n testnqn
real 0m16.956s
user 0m0.000s
sys 0m0.237s
# time nvme reset /dev/nvme0
real 1m33.623s
user 0m0.000s
sys 0m0.024s
# time nvme disconnect-all
real 1m26.640s
user 0m0.000s
sys 0m9.969s
host dmesg:
https://pastebin.com/8T3Lqtkn
target dmesg:
https://pastebin.com/KpFP7xG2
non-debug kernel: no timeout issue, but reset still takes 12s and
disconnect takes 8s
host:
# time nvme connect -t rdma -a 172.31.0.202 -s 4420 -n testnqn
real 0m4.579s
user 0m0.000s
sys 0m0.004s
# time nvme reset /dev/nvme0
real 0m12.778s
user 0m0.000s
sys 0m0.006s
# time nvme reset /dev/nvme0
real 0m12.793s
user 0m0.000s
sys 0m0.006s
# time nvme reset /dev/nvme0
real 0m12.808s
user 0m0.000s
sys 0m0.006s
# time nvme disconnect-all
real 0m8.348s
user 0m0.000s
sys 0m0.189s
These are very long times for a non-debug kernel...
Max, do you see the root cause for this?
Yi, does this happen with rxe/siw as well?
Hi Sagi
rxe/siw take less than 1s
with rdma_rxe
# time nvme reset /dev/nvme0
real 0m0.094s
user 0m0.000s
sys 0m0.006s
with siw
# time nvme reset /dev/nvme0
real 0m0.097s
user 0m0.000s
sys 0m0.006s
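For reference, the soft devices for this comparison can be created with
the iproute2 rdma tool; a minimal sketch, assuming eth0 is the netdev
on the 172.31.0.x network (the link names rxe0/siw0 are arbitrary):
# modprobe rdma_rxe
# rdma link add rxe0 type rxe netdev eth0
# modprobe siw
# rdma link add siw0 type siw netdev eth0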
This is only reproducible with the mlx IB card. As I mentioned before,
the reset operation time changed from 3s to 12s after the below commit;
could you check it?
commit 5ec5d3bddc6b912b7de9e3eb6c1f2397faeca2bc
Author: Max Gurtovoy <maxg@xxxxxxxxxxxx>
Date: Tue May 19 17:05:56 2020 +0300
nvme-rdma: add metadata/T10-PI support
I couldn't repro these long reset times.
Nevertheless, the above commit added T10-PI offloads: for supported
devices, it creates extra resources in HW (more memory keys per task).
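One way to observe these extra allocations from userspace is the
resource accounting in the iproute2 rdma tool; a sketch, assuming
mlx5_0 is the PI-capable device here:
# rdma resource show
# rdma resource show mr dev mlx5_0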
I suggested making this configuration part of the "nvme connect"
command, so that this resource allocation could be skipped by default,
but during the review I was asked to make it the default behavior.
Sagi/Christoph,
WDYT? Should we reconsider the "nvme connect --with_metadata" option?
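To make that concrete, the idea would be an opt-in at connect time,
along these lines (hypothetical syntax, the flag does not exist in
nvme-cli today):
# nvme connect -t rdma -a 172.31.0.202 -s 4420 -n testnqn --with_metadata
Controllers connected without the flag would then skip the extra
per-task memory key allocation.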