Re: NVMe over RDMA latency

With a real NVMe device on the target, the host sees a latency of about 33us:

root@host:~# fio t.job
job1: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.9-3-g2078c
Starting 1 process
Jobs: 1 (f=1): [r(1)] [100.0% done] [113.1MB/0KB/0KB /s] [28.1K/0/0 iops] [eta 00m:00s]
job1: (groupid=0, jobs=1): err= 0: pid=3139: Wed Jul 13 11:22:15 2016
   read : io=2259.5MB, bw=115680KB/s, iops=28920, runt= 20001msec
     slat (usec): min=1, max=195, avg= 2.62, stdev= 1.24
     clat (usec): min=0, max=7962, avg=30.97, stdev=14.50
      lat (usec): min=27, max=7968, avg=33.70, stdev=14.69
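
For reference, a minimal t.job matching the parameters shown above might look like the sketch below; the filename, direct=1, and time_based settings are assumptions, since they don't appear in the output:

[job1]
; assumed device path on the host; not shown in the original output
filename=/dev/nvme0n1
rw=randread
bs=4k
ioengine=libaio
; queue depth 1, to measure per-IO latency rather than throughput
iodepth=1
; assumed: direct I/O to bypass the page cache
direct=1
; matches runt=20001msec in the output
runtime=20
time_based=1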

Testing the same NVMe device locally on the target gives about 23us, so nvmeof adds only about 10us of latency.
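
The local baseline can be measured the same way directly on the target; a minimal one-liner sketch (the device path is again an assumption):

root@target:~# fio --name=job1 --filename=/dev/nvme0n1 --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=1 --direct=1 --runtime=20 --time_based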

That's nice!

I didn't understand, though: what was changed?


