Re: Benchmarking

I use fio.

 

#1.  Be sure to have a job that writes random data to 100% of the RBD device before starting.  I use size=100% (see the prefill sketch after this list).

#2.  Benchmark something that makes sense for your use case, e.g. benchmarking 1M sequential writes makes no sense if your workload is 64k random.

#3.  A follow-on to number 2: if the workload is latency sensitive, use latency targets.  The following settings ramp up the queue depth to find the maximum setting that keeps 99.99999% of I/Os inside the 10ms latency target over a 5-second window (the second sketch after this list combines these with #4 and #5):

     latency_target=10ms
     latency_window=5s
     latency_percentile=99.99999

#4. Use the ramp_time option to cut off data at the front of the test.  This will remove initial spikes from the data.

#5.  Use long-running tests with larger data sets.  Short tests will exhibit the effects of caching, and that may be okay at times, but generally it’s a good idea to have a long-running test (30+ minutes) that demonstrates the cluster’s performance under steady load.
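
A sketch of a prefill job along the lines of #1 (the client, pool, and image names are placeholders like in your job file below, and the 4M block size and queue depth are just reasonable fill values):

[global]
ioengine=rbd
clientname=admin
pool=[poolname]
rbdname=[rbdname]

[prefill]
# write random data across the whole image once, so the benchmark
# hits allocated data instead of unwritten (zero-returning) extents
rw=randwrite
bs=4M
iodepth=16
size=100%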
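
And a steady-state sketch combining #3, #4, and #5 (again, the names are placeholders and the 64k random-write workload is only an example):

[global]
ioengine=rbd
clientname=admin
pool=[poolname]
rbdname=[rbdname]
direct=1

[steady-state]
# 64k random writes; fio ramps the queue depth up to iodepth and
# settles on the highest depth that stays inside the latency target
rw=randwrite
bs=64k
iodepth=64
latency_target=10ms
latency_window=5s
latency_percentile=99.99999
# drop the first 60 seconds, then hold steady load for 30 minutes
ramp_time=60
time_based
runtime=30m

Both run with a plain "fio <jobfile>" from any client that can reach the pool.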

 

 

David Byte

Sr. Technology Strategist

SCE Enterprise Linux 

SCE Enterprise Storage
Alliances and SUSE Embedded
dbyte@xxxxxxxx
918.528.4422

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Nino Bosteels <n.bosteels@xxxxxxxxxxxxx>
Date: Tuesday, June 19, 2018 at 7:08 AM
To: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: [ceph-users] Benchmarking

 

Hi,

 

Anyone got tips on how best to benchmark a Ceph block device (RBD)?

 

I’ve currently found the more traditional ways (dd, iostat, bonnie++, the Phoronix Test Suite) and fio, which actually supports an rbd engine.

 

Though there’s not a lot of information about it to be found online (in contrast to, e.g., benchmarking ZFS).

 

Currently I’m using the following fio job, the closest match available to the BackupPC workload (small random reads):

 

[global]
# fio's native RBD engine; talks to the cluster through librbd,
# so the image does not need to be mapped on the client
ioengine=rbd
clientname=admin
pool=[poolname]
rbdname=[rbdname]
rw=randread
randrepeat=1
direct=1
# discard the first 4 seconds of results
ramp_time=4
bs=4k

[rbd_iodepth32]
iodepth=32

 

Any ideas (or questions) welcome !

 

Nino

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
