Single-threaded IOPS on SSD pool.

Hi.

This is more an inquiry to figure out how our current setup compares
to other setups. I have a 3x replicated SSD pool with RBD images.
When running fio on /tmp I'm interested in seeing how many IOPS a
single thread can get, since Ceph scales up very nicely with
concurrency (a queue-depth sweep is sketched after the job file below).

Currently 34 OSDs of ~896 GB each (Intel D3-S4510) across 7 OSD hosts.

jk@iguana:/tmp$ for i in 01 02 03 04 05 06 07; do ping -c 10 ceph-osd$i; done | egrep '(statistics|rtt)'
--- ceph-osd01.nzcorp.net ping statistics ---
rtt min/avg/max/mdev = 0.316/0.381/0.483/0.056 ms
--- ceph-osd02.nzcorp.net ping statistics ---
rtt min/avg/max/mdev = 0.293/0.415/0.625/0.100 ms
--- ceph-osd03.nzcorp.net ping statistics ---
rtt min/avg/max/mdev = 0.319/0.395/0.558/0.074 ms
--- ceph-osd04.nzcorp.net ping statistics ---
rtt min/avg/max/mdev = 0.224/0.352/0.492/0.077 ms
--- ceph-osd05.nzcorp.net ping statistics ---
rtt min/avg/max/mdev = 0.257/0.360/0.444/0.059 ms
--- ceph-osd06.nzcorp.net ping statistics ---
rtt min/avg/max/mdev = 0.209/0.334/0.442/0.062 ms
--- ceph-osd07.nzcorp.net ping statistics ---
rtt min/avg/max/mdev = 0.259/0.401/0.517/0.069 ms

OK, so average network latency from the VM to the OSDs is ~0.4 ms.
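For a baseline below the RBD/VM layer, rados bench with a single op in
flight measures raw RADOS latency against the pool. A minimal sketch,
assuming a hypothetical pool name rbd_ssd (note it writes benchmark
objects into the pool):

$ rados bench -p rbd_ssd 30 write -t 1 -b 4096 --no-cleanup  # 4 KiB writes, one op in flight
$ rados bench -p rbd_ssd 30 rand -t 1                        # random reads of those objects
$ rados -p rbd_ssd cleanup                                   # remove the benchmark objects

The average latency reported there is roughly network RTT plus OSD
service time, without the guest block layer on top.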

$ fio fio-job-randr.ini
test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [r(1)] [100.0% done] [2145KB/0KB/0KB /s] [536/0/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=29519: Wed Jun  5 08:40:51 2019
  Description  : [fio random 4k reads]
  read : io=143352KB, bw=2389.2KB/s, iops=597, runt= 60001msec
    slat (usec): min=8, max=1925, avg=30.24, stdev=13.56
    clat (usec): min=7, max=321039, avg=1636.47, stdev=4346.52
     lat (usec): min=102, max=321074, avg=1667.58, stdev=4346.57
    clat percentiles (usec):
     |  1.00th=[  157],  5.00th=[  844], 10.00th=[  924], 20.00th=[ 1012],
     | 30.00th=[ 1096], 40.00th=[ 1160], 50.00th=[ 1224], 60.00th=[ 1304],
     | 70.00th=[ 1400], 80.00th=[ 1528], 90.00th=[ 1768], 95.00th=[ 2128],
     | 99.00th=[11328], 99.50th=[18304], 99.90th=[51456], 99.95th=[94720],
     | 99.99th=[216064]
    bw (KB  /s): min=    0, max= 3089, per=99.39%, avg=2374.50, stdev=472.15
    lat (usec) : 10=0.01%, 100=0.01%, 250=2.95%, 500=0.03%, 750=0.27%
    lat (usec) : 1000=14.96%
    lat (msec) : 2=75.87%, 4=2.99%, 10=1.78%, 20=0.73%, 50=0.30%
    lat (msec) : 100=0.07%, 250=0.03%, 500=0.01%
  cpu          : usr=0.76%, sys=3.29%, ctx=38871, majf=0, minf=11
  IO depths    : 1=108.2%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=35838/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=143352KB, aggrb=2389KB/s, minb=2389KB/s, maxb=2389KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  vda: ios=38631/51, merge=0/3, ticks=62668/40, in_queue=62700, util=96.77%
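
As a cross-check without the guest block layer and virtio, the same
single-threaded random-read pattern can be driven through librbd from a
client host. A rough sketch, assuming a hypothetical pool/image
rbd_ssd/test-img:

$ rbd bench --io-type read --io-pattern rand --io-size 4096 \
      --io-threads 1 --io-total 256M rbd_ssd/test-img

If this lands near the fio numbers, the VM I/O path is not the
bottleneck.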


And the fio job file:
$ cat fio-job-randr.ini
[global]
readwrite=randread
blocksize=4k
ioengine=libaio
numjobs=1
thread=0
direct=1
iodepth=1
group_reporting=1
ramp_time=5
norandommap=1
description=fio random 4k reads
time_based=1
runtime=60
randrepeat=0

[test]
size=1g
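
Since Ceph scales with queue depth, a quick sweep makes the contrast
with the single-threaded number visible. A minimal sketch with the same
options driven from the command line; /tmp/fio-sweep is an arbitrary
scratch file:

$ for qd in 1 2 4 8 16 32; do
      fio --name=randr-qd$qd --filename=/tmp/fio-sweep --size=1g \
          --rw=randread --bs=4k --ioengine=libaio --direct=1 \
          --iodepth=$qd --ramp_time=5 --time_based --runtime=30 \
          --norandommap --randrepeat=0 | grep iops
  done

The iodepth=1 run should land near the job file's result above; IOPS
should climb steeply at the lower depths if latency, not the drives, is
the limit.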


Single-threaded performance is ~500-600 IOPS, i.e. an average latency
of ~1.6 ms. Is that comparable to what others are seeing?

