Re: Intel P3700 PCI-e as journal drives?

On 1/12/2016 4:51 AM, Burkhard Linke wrote:
Hi,

On 01/08/2016 03:02 PM, Paweł Sadowski wrote:
Hi,

Quick results for 1/5/10 jobs:

*snipsnap*
Run status group 0 (all jobs):
WRITE: io=21116MB, aggrb=360372KB/s, minb=360372KB/s, maxb=360372KB/s, mint=60000msec, maxt=60000msec

*snipsnap*
Run status group 0 (all jobs):
WRITE: io=57723MB, aggrb=985119KB/s, minb=985119KB/s, maxb=985119KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
   nvme0n1: ios=0/14754265, merge=0/0, ticks=0/253092, in_queue=254880, util=100.00%
*snipsnap*

Run status group 0 (all jobs):
WRITE: io=65679MB, aggrb=1094.7MB/s, minb=1094.7MB/s, maxb=1094.7MB/s, mint=60001msec, maxt=60001msec

*snipsnap*

=== START OF INFORMATION SECTION ===
Vendor:               NVMe
Product:              INTEL SSDPEDMD01
Revision:             8DV1
User Capacity:        1,600,321,314,816 bytes [1.60 TB]
Logical block size:   512 bytes
Rotation Rate:        Solid State Device

Thank you for the fast answer. The numbers really look promising! Do you have experience with the speed of these drives with respect to their size? Are the smaller models (e.g. the 400GB one) as fast as the larger ones, or does the speed scale with the overall size, e.g. due to a larger number of flash chips / memory channels?

Attached are similar runs on a 400GB P3700. The 400GB is a little slower than the 1.6TB but not bad.

Regards,
Burkhard

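For anyone who wants to reproduce the attached runs, a minimal shell sketch of the sweep follows (same fio flags as in the transcript below; it assumes the spare partition /dev/nvme0n1p4 used there and that fio is installed, and note that writing directly to a raw partition overwrites whatever is on it):

#!/bin/bash
# Journal-style test: 4 KB direct, synchronous sequential writes
# (--direct=1 --sync=1), swept over 1, 5 and 10 concurrent jobs,
# matching the three runs in the transcript below.
# WARNING: writes straight to the partition and destroys its contents.
DEV=/dev/nvme0n1p4   # spare partition used in the attached runs

for JOBS in 1 5 10; do
    fio --filename="$DEV" --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs="$JOBS" --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test
done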

Script started on Tue 12 Jan 2016 03:04:55 AM EST
[root@hv01 ~]# fio --filename=/dev/nvme0n1p4 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.2.8
Starting 1 process
Jobs: 1 (f=1)
journal-test: (groupid=0, jobs=1): err= 0: pid=87175: Tue Jan 12 03:05:59 2016
  write: io=23805MB, bw=406279KB/s, iops=101569, runt= 60000msec
    clat (usec): min=8, max=6156, avg= 9.52, stdev=17.85
     lat (usec): min=8, max=6156, avg= 9.59, stdev=17.85
    clat percentiles (usec):
     |  1.00th=[    8],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
     | 30.00th=[    8], 40.00th=[    9], 50.00th=[    9], 60.00th=[    9],
     | 70.00th=[    9], 80.00th=[    9], 90.00th=[   11], 95.00th=[   18],
     | 99.00th=[   20], 99.50th=[   23], 99.90th=[   29], 99.95th=[   35],
     | 99.99th=[   51]
    bw (KB  /s): min=368336, max=419216, per=99.98%, avg=406197.88, stdev=11905.08
    lat (usec) : 10=86.21%, 20=12.49%, 50=1.29%, 100=0.01%, 250=0.01%
    lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=18.81%, sys=11.95%, ctx=6094194, majf=0, minf=116
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=6094190/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=23805MB, aggrb=406279KB/s, minb=406279KB/s, maxb=406279KB/s, mint=60000msec, maxt=60000msec

Disk stats (read/write):
  nvme0n1: ios=74/6087837, merge=0/0, ticks=5/43645, in_queue=43423, util=71.60%



[root@hv01 ~]# fio --filename=/dev/nvme0n1p4 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=5 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio-2.2.8
Starting 5 processes
Jobs: 5 (f=5)
journal-test: (groupid=0, jobs=5): err= 0: pid=87229: Tue Jan 12 03:07:31 2016
  write: io=54260MB, bw=926023KB/s, iops=231505, runt= 60001msec
    clat (usec): min=8, max=12011, avg=20.95, stdev=64.79
     lat (usec): min=8, max=12012, avg=21.06, stdev=64.79
    clat percentiles (usec):
     |  1.00th=[    9],  5.00th=[   10], 10.00th=[   11], 20.00th=[   12],
     | 30.00th=[   13], 40.00th=[   14], 50.00th=[   16], 60.00th=[   17],
     | 70.00th=[   19], 80.00th=[   23], 90.00th=[   28], 95.00th=[   33],
     | 99.00th=[  108], 99.50th=[  203], 99.90th=[  540], 99.95th=[  684],
     | 99.99th=[ 1272]
    bw (KB  /s): min=132048, max=236048, per=20.01%, avg=185337.04, stdev=17916.73
    lat (usec) : 10=2.84%, 20=67.27%, 50=27.54%, 100=1.28%, 250=0.69%
    lat (usec) : 500=0.27%, 750=0.08%, 1000=0.02%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=5.62%, sys=21.65%, ctx=13890559, majf=0, minf=576
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=13890580/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=54260MB, aggrb=926023KB/s, minb=926023KB/s, maxb=926023KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  nvme0n1: ios=6/13879611, merge=0/0, ticks=0/226666, in_queue=226376, util=100.00%



[root@hv01 ~]# fio --filename=/dev/nvme0n1p4 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=10 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio-2.2.8
Starting 10 processes
Jobs: 10 (f=10)
journal-test: (groupid=0, jobs=10): err= 0: pid=87286: Tue Jan 12 03:09:05 2016
  write: io=53937MB, bw=920511KB/s, iops=230127, runt= 60001msec
    clat (usec): min=8, max=12391, avg=42.68, stdev=115.69
     lat (usec): min=8, max=12391, avg=42.81, stdev=115.69
    clat percentiles (usec):
     |  1.00th=[    9],  5.00th=[   11], 10.00th=[   13], 20.00th=[   15],
     | 30.00th=[   17], 40.00th=[   21], 50.00th=[   26], 60.00th=[   31],
     | 70.00th=[   37], 80.00th=[   46], 90.00th=[   62], 95.00th=[   91],
     | 99.00th=[  390], 99.50th=[  692], 99.90th=[ 1256], 99.95th=[ 1368],
     | 99.99th=[ 4768]
    bw (KB  /s): min=54536, max=117256, per=10.01%, avg=92108.58, stdev=10104.39
    lat (usec) : 10=1.38%, 20=33.90%, 50=47.05%, 100=13.11%, 250=2.99%
    lat (usec) : 500=0.83%, 750=0.30%, 1000=0.18%
    lat (msec) : 2=0.24%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=3.47%, sys=12.54%, ctx=13807845, majf=0, minf=1051
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=13807902/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=53937MB, aggrb=920511KB/s, minb=920511KB/s, maxb=920511KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  nvme0n1: ios=2/13793371, merge=0/0, ticks=1/514106, in_queue=515725, util=100.00%



[root@hv01 ~]# smartctl -d scsi -i /dev/nvme0n1
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.el7.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               NVMe
Product:              INTEL SSDPEDMD40
Revision:             0131
User Capacity:        400,088,457,216 bytes [400 GB]
Logical block size:   512 bytes
Rotation Rate:        Solid State Device
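As a side note, smartctl 6.2 predates native NVMe support, which is why the identification above goes through the SCSI translation (-d scsi). On a newer system the same identity and health data can be read directly; a couple of roughly equivalent commands (assuming smartmontools 6.5 or later and the nvme-cli package are available, not tested on this box):

# smartmontools 6.5+ detects NVMe devices natively
smartctl -i /dev/nvme0n1
smartctl -a /dev/nvme0n1

# nvme-cli: controller identity and SMART/health log
nvme id-ctrl /dev/nvme0
nvme smart-log /dev/nvme0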
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
