[Single OSD performance on SSD] Can't go over 3.2K IOPS

On 29/08/14 04:11, Sebastien Han wrote:
> Hey all,

> See my fio template:
>
> [global]
> #logging
> #write_iops_log=write_iops_log
> #write_bw_log=write_bw_log
> #write_lat_log=write_lat_lo
>
> time_based
> runtime=60
>
> ioengine=rbd
> clientname=admin
> pool=test
> rbdname=fio
> invalidate=0    # mandatory
> #rw=randwrite
> rw=write
> bs=4k
> #bs=32m
> size=5G
> group_reporting
>
> [rbd_iodepth32]
> iodepth=32
> direct=1
>
> See my fio output:
>
> rbd_iodepth32: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
> fio-2.1.11-14-gb74e
> Starting 1 process
> rbd engine: RBD version: 0.1.8
> Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/12876KB/0KB /s] [0/3219/0 iops] [eta 00m:00s]
> rbd_iodepth32: (groupid=0, jobs=1): err= 0: pid=32116: Thu Aug 28 00:28:26 2014
>    write: io=771448KB, bw=12855KB/s, iops=3213, runt= 60010msec
>      slat (usec): min=42, max=1578, avg=66.50, stdev=16.96
>      clat (msec): min=1, max=28, avg= 9.85, stdev= 1.48
>       lat (msec): min=1, max=28, avg= 9.92, stdev= 1.47
>      clat percentiles (usec):
>       |  1.00th=[ 6368],  5.00th=[ 8256], 10.00th=[ 8640], 20.00th=[ 9152],
>       | 30.00th=[ 9408], 40.00th=[ 9664], 50.00th=[ 9792], 60.00th=[10048],
>       | 70.00th=[10176], 80.00th=[10560], 90.00th=[10944], 95.00th=[11456],
>       | 99.00th=[13120], 99.50th=[16768], 99.90th=[25984], 99.95th=[27008],
>       | 99.99th=[28032]
>      bw (KB  /s): min=11864, max=13808, per=100.00%, avg=12864.36, stdev=407.35
>      lat (msec) : 2=0.03%, 4=0.54%, 10=59.79%, 20=39.24%, 50=0.41%
>    cpu          : usr=19.15%, sys=4.69%, ctx=326309, majf=0, minf=426088
>    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=33.9%, 32=66.1%, >=64=0.0%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>       complete  : 0=0.0%, 4=99.6%, 8=0.4%, 16=0.1%, 32=0.1%, 64=0.0%, >=64=0.0%
>       issued    : total=r=0/w=192862/d=0, short=r=0/w=0/d=0
>       latency   : target=0, window=0, percentile=100.00%, depth=32
>

Hi Sebastien,

Looking at your fio template - were you running with rw=write or 
rw=randwrite? If the latter, mounting the OSD data partition (XFS) with 
nobarrier seems to give much better results [1]. The run below is for a 
single OSD on an XFS partition on an Intel 520, with another 520 as the 
journal:

rbd_thread: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
fio-2.1.11-20-g9a44
Starting 1 process
rbd engine: RBD version: 0.1.8
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/23480KB/0KB /s] [0/5870/0 iops] [eta 00m:00s]
rbd_thread: (groupid=0, jobs=1): err= 0: pid=2820: Fri Aug 29 13:59:13 2014
   write: io=1024.0MB, bw=27540KB/s, iops=6885, runt= 38074msec
     slat (usec): min=16, max=4323, avg=52.28, stdev=65.23
     clat (usec): min=565, max=63714, avg=9014.80, stdev=3814.57
      lat (usec): min=949, max=63774, avg=9067.07, stdev=3811.52
     clat percentiles (usec):
      |  1.00th=[ 3312],  5.00th=[ 4448], 10.00th=[ 5216], 20.00th=[ 6240],
      | 30.00th=[ 7072], 40.00th=[ 7776], 50.00th=[ 8512], 60.00th=[ 9280],
      | 70.00th=[10176], 80.00th=[11328], 90.00th=[13120], 95.00th=[14912],
      | 99.00th=[19328], 99.50th=[21888], 99.90th=[48384], 99.95th=[51968],
      | 99.99th=[56064]
     bw (KB  /s): min=20128, max=30400, per=100.00%, avg=27564.95, stdev=1448.85
     lat (usec) : 750=0.01%, 1000=0.01%
     lat (msec) : 2=0.02%, 4=2.97%, 10=65.43%, 20=30.77%, 50=0.73%
     lat (msec) : 100=0.08%
   cpu          : usr=29.17%, sys=3.49%, ctx=208270, majf=0, minf=16761
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.5%, 32=72.2%, >=64=27.2%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=94.9%, 8=3.3%, 16=1.3%, 32=0.4%, 64=0.1%, >=64=0.0%
      issued    : total=r=0/w=262144/d=0, short=r=0/w=0/d=0
      latency   : target=0, window=0, percentile=100.00%, depth=64
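
For reference, the job file for this run was basically your template with a 
few changes. I'm reconstructing it here from the output above rather than 
pasting it verbatim, so treat the exact values (and the pool/image/client 
names, which are placeholders) as approximate:

[global]
ioengine=rbd
clientname=admin   # placeholder cephx user, adjust to your setup
pool=test          # placeholder pool name
rbdname=fio        # placeholder image name
invalidate=0       # mandatory
rw=randwrite
bs=4k
size=1G            # matches the io=1024.0MB total above; no time_based
group_reporting

[rbd_thread]
iodepth=64
direct=1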



Regards

Mark

[1] I'm thinking it should be safe to disable barriers, as Ceph seems to 
do fsync and friends when it needs data to persist... however, it would 
be good to confirm this - guys?
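
To be clear about what I mean by nobarrier: it's just the XFS mount option 
on the OSD data partition, along these lines (device and mount point below 
are examples only, adjust for your layout):

# one-off remount of an existing OSD data partition
mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0

# or persistently via /etc/fstab
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  rw,noatime,nobarrier  0 0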

