On 12/15/2021 3:15 AM, Yi Zhang wrote:
On Tue, Dec 14, 2021 at 8:01 PM Max Gurtovoy <mgurtovoy@xxxxxxxxxx> wrote:
On 12/14/2021 12:39 PM, Sagi Grimberg wrote:
Hi Sagi
It is still reproducible with the change; here is the log:
# time nvme reset /dev/nvme0
real 0m12.973s
user 0m0.000s
sys 0m0.006s
# time nvme reset /dev/nvme0
real 1m15.606s
user 0m0.000s
sys 0m0.007s
Does it speed up if you use less queues? (i.e. connect with -i 4) ?
Yes, with -i 4 it stays at a stable 1.3s:
# time nvme reset /dev/nvme0
real 0m1.225s
user 0m0.000s
sys 0m0.007s
So it appears that destroying a qp takes a long time on IB for some reason...
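(For reference, the "-i 4" connect is nvme-cli's --nr-io-queues option; roughly like the line below, where the address, port and NQN are placeholders rather than values from this setup:

# nvme connect -t rdma -a 192.168.1.10 -s 4420 -n testnqn -i 4

-i/--nr-io-queues caps how many I/O queues the host creates, so fewer queue pairs have to be torn down and rebuilt on each reset.)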
# dmesg | grep nvme
[ 900.634877] nvme nvme0: resetting controller
[ 909.026958] nvme nvme0: creating 40 I/O queues.
[ 913.604297] nvme nvme0: mapped 40/0/0 default/read/poll queues.
[ 917.600993] nvme nvme0: resetting controller
[ 988.562230] nvme nvme0: I/O 2 QID 0 timeout
[ 988.567607] nvme nvme0: Property Set error: 881, offset 0x14
[ 988.608181] nvme nvme0: creating 40 I/O queues.
[ 993.203495] nvme nvme0: mapped 40/0/0 default/read/poll queues.
BTW, this issue cannot be reproduced on my NVMe/RoCE environment.
Then I think that we need the rdma folks to help here...
Max?
It took me 12s to reset a controller with 63 IO queues with 5.16-rc3+.
Can you try to reproduce it with the latest versions, please?
Or give the exact scenario?
Yeah, both target and client are using Mellanox Technologies MT27700
Family [ConnectX-4] adapters. Could you try stressing "nvme reset
/dev/nvme0"? The first reset will take 12s, and the issue can always be
reproduced on the second reset operation.
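To spell out the stress I mean, a simple loop like the one below is enough (the device name and iteration count are just examples):

# for i in $(seq 1 10); do time nvme reset /dev/nvme0; done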
I created a target with 1 namespace backed by null_blk and connected to
it from the same server over a loopback RDMA connection using the
ConnectX-4 adapter.
I ran a loop with the "nvme reset" command and each reset took 4-5
seconds.
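For anyone who wants to reproduce it, a rough sketch of that kind of target setup via configfs is below (the subsystem NQN and address are placeholders, and the actual setup may well have been done with nvmetcli instead):

# modprobe null_blk nr_devices=1
# modprobe nvmet-rdma
# mkdir /sys/kernel/config/nvmet/subsystems/testnqn
# echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/attr_allow_any_host
# mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
# echo -n /dev/nullb0 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
# echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/enable
# mkdir /sys/kernel/config/nvmet/ports/1
# echo rdma > /sys/kernel/config/nvmet/ports/1/addr_trtype
# echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
# echo 192.168.1.10 > /sys/kernel/config/nvmet/ports/1/addr_traddr
# echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
# ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
# nvme connect -t rdma -a 192.168.1.10 -s 4420 -n testnqn

With a loopback connection like this, the host and target queue pairs live on the same ConnectX-4 adapter, so each reset exercises QP create/destroy on both sides.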