[Single OSD performance on SSD] Can't go over 3.2K IOPS

On 29/08/14 22:17, Sebastien Han wrote:

> @Mark thanks for trying this :)
> Unfortunately using nobarrier and another dedicated SSD for the journal (plus your ceph setting) didn't bring much; now I can reach 3.5K IOPS.
> By any chance, would it be possible for you to test with a single OSD SSD?
>

Funny you should bring this up - I have just updated my home system with 
a pair of Crucial m550s, so I figured I'd try a run with 2x SSD (1 for 
journal, 1 for data) and 1x SSD (journal + data).
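
For anyone wanting to reproduce the setup: a throwaway single-OSD 
cluster can be stood up manually along these lines (a sketch - osd id 0, 
the /ceph2 mount point and the journal device are placeholders):

$ ceph osd create                 # allocates the next free osd id (0 here)
$ sudo ceph-osd -i 0 --mkfs --mkkey \
      --osd-data /ceph2 --osd-journal /dev/sde1
$ sudo ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /ceph2/keyring
$ sudo ceph-osd -i 0 --osd-data /ceph2 --osd-journal /dev/sde1   # start the daemon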


The results were the opposite of what I expected (see below), with 2x 
SSD getting about 6K IOPS and 1x SSD getting 8K IOPS (wtf):

I'm running this on Ubuntu 14.04 + ceph git master from a few days ago:

$ ceph --version
ceph version 0.84-562-g8d40600 (8d406001d9b84d9809d181077c61ad9181934752)

The data partition was created with:

$ sudo mkfs.xfs -f -l lazy-count=1 /dev/sdd4

and mounted via:

$ sudo mount -o nobarrier,allocsize=4096 /dev/sdd4 /ceph2
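
As a sanity check, the raw 4k randwrite ceiling of the SSD itself is 
worth measuring with fio before layering Ceph on top. A sketch - note it 
writes directly to the device, so aim it at a spare partition (/dev/sdd3 
here is only a placeholder):

$ sudo fio --name=raw-baseline --filename=/dev/sdd3 --ioengine=libaio \
      --direct=1 --rw=randwrite --bs=4k --iodepth=64 \
      --runtime=60 --time_based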


I've attached my ceph.conf and the fio template FWIW.
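
Most of the non-debug settings in that conf should also be adjustable on 
a live OSD via injectargs, which makes quick A/B runs easier - a sketch, 
assuming osd.0 (some options only take effect on restart):

$ ceph tell osd.0 injectargs '--filestore_max_sync_interval 90'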

2x Crucial m550 (1x journal, 1x data)

rbd_thread: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
fio-2.1.11-20-g9a44
Starting 1 process
rbd_thread: (groupid=0, jobs=1): err= 0: pid=5511: Sun Aug 31 17:33:40 2014
   write: io=1024.0MB, bw=24694KB/s, iops=6173, runt= 42462msec
     slat (usec): min=11, max=4086, avg=51.19, stdev=59.30
     clat (msec): min=3, max=24, avg= 9.99, stdev= 1.57
      lat (msec): min=3, max=24, avg=10.04, stdev= 1.57
     clat percentiles (usec):
      |  1.00th=[ 6624],  5.00th=[ 7584], 10.00th=[ 8032], 20.00th=[ 8640],
      | 30.00th=[ 9152], 40.00th=[ 9536], 50.00th=[ 9920], 60.00th=[10304],
      | 70.00th=[10816], 80.00th=[11328], 90.00th=[11968], 95.00th=[12480],
      | 99.00th=[13888], 99.50th=[14528], 99.90th=[17024], 99.95th=[19584],
      | 99.99th=[23168]
     bw (KB  /s): min=23158, max=25592, per=100.00%, avg=24711.65, stdev=470.72
     lat (msec) : 4=0.01%, 10=50.69%, 20=49.26%, 50=0.04%
   cpu          : usr=25.27%, sys=2.68%, ctx=266729, majf=0, minf=16773
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=83.8%, >=64=15.8%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=93.8%, 8=2.9%, 16=2.2%, 32=1.0%, 64=0.1%, >=64=0.0%
      issued    : total=r=0/w=262144/d=0, short=r=0/w=0/d=0
      latency   : target=0, window=0, percentile=100.00%, depth=64

1x Crucial m550 (journal + data)

rbd_thread: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
fio-2.1.11-20-g9a44
Starting 1 process
rbd_thread: (groupid=0, jobs=1): err= 0: pid=6887: Sun Aug 31 17:42:22 2014
   write: io=1024.0MB, bw=32778KB/s, iops=8194, runt= 31990msec
     slat (usec): min=10, max=4016, avg=45.68, stdev=41.60
     clat (usec): min=428, max=25688, avg=7658.03, stdev=1600.65
      lat (usec): min=923, max=25757, avg=7703.72, stdev=1598.77
     clat percentiles (usec):
      |  1.00th=[ 3440],  5.00th=[ 5216], 10.00th=[ 6048], 20.00th=[ 6624],
      | 30.00th=[ 7008], 40.00th=[ 7328], 50.00th=[ 7584], 60.00th=[ 7904],
      | 70.00th=[ 8256], 80.00th=[ 8640], 90.00th=[ 9280], 95.00th=[10048],
      | 99.00th=[12864], 99.50th=[14528], 99.90th=[17536], 99.95th=[19328],
      | 99.99th=[21888]
     bw (KB  /s): min=30768, max=35160, per=100.00%, avg=32907.35, stdev=934.80
     lat (usec) : 500=0.01%, 1000=0.01%
     lat (msec) : 2=0.04%, 4=1.80%, 10=93.15%, 20=4.97%, 50=0.04%
   cpu          : usr=32.32%, sys=3.05%, ctx=179657, majf=0, minf=16751
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=59.7%, >=64=40.0%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=96.8%, 8=2.6%, 16=0.5%, 32=0.1%, 64=0.1%, >=64=0.0%
      issued    : total=r=0/w=262144/d=0, short=r=0/w=0/d=0
      latency   : target=0, window=0, percentile=100.00%, depth=64


-------------- next part: ceph.conf --------------
[global]
fsid = 35d17fea-ff08-4466-a48a-6294f64ac4ce
mon_initial_members = vedavec
mon_host = 192.168.1.64

debug_lockdep = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_timer = 0/0
debug_filer = 0/0
debug_objecter = 0/0
debug_rados = 0/0
debug_rbd = 0/0
debug_journaler = 0/0
debug_objectcacher = 0/0
debug_client = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_ms = 0/0
debug_monc = 0/0
debug_tp = 0/0
debug_auth = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_perfcounter = 0/0
debug_asok = 0/0
debug_throttle = 0/0
debug_mon = 0/0
debug_paxos = 0/0
debug_rgw = 0/0

filestore_xattr_use_omap = true
filestore max sync interval = 90
filestore_queue_max_ops = 100000
osd pool default size = 1
[osd]
;osd journal size = 30000
osd journal size = 15000
;osd_op_complaint_time = 10
;osd_op_threads = 4
-------------- next part: fio job template --------------
######################################################################
# Example test for the RBD engine.
#
# From http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
#
# Runs a 4k random write test against an RBD via librbd
#
# NOTE: Make sure you either have an RBD named 'voltest' or change
#       the rbdname parameter.
######################################################################
[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
ioengine=rbd
clientname=admin
pool=rbd
rbdname=voltest
invalidate=0    # mandatory
rw=randwrite
bs=4k

[rbd_thread]
iodepth=64
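
To actually run the template, the RBD image has to exist first (the 
image size and the .fio filename below are arbitrary):

$ rbd create voltest --size 10240   # 10 GB image in the 'rbd' pool
$ fio rbd-write-test.fio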

