Re: Performance doesn't scale well on a full SSD cluster.


Thanks for the detailed information, but I am already using fio with the rbd engine. With about 4 volumes I can already reach the peak.
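(For anyone reproducing this: one way to drive several volumes in parallel from a single fio invocation is one job section per image -- a sketch, where vol1-vol4 are placeholder images that must already exist in the pool:)

        [global]
        ioengine=rbd
        clientname=admin
        pool=rbd
        rw=randwrite
        bs=1m
        iodepth=8
        runtime=120
        time_based

        # no stonewall between sections, so all four jobs run concurrently
        [vol1]
        rbdname=vol1

        [vol2]
        rbdname=vol2

        [vol3]
        rbdname=vol3

        [vol4]
        rbdname=vol4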


On 17 October 2014 at 12:55 AM, "Daniel Schwager" <Daniel.Schwager@xxxxxxxx> wrote:

Hi Mark,

 

maybe you want to check out the rbd-enabled fio

                http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html

 

                # ceph-devel provides the librbd/librados headers that fio's ./configure looks for
                yum install ceph-devel

                git clone git://git.kernel.dk/fio.git

                cd fio ; ./configure ; make -j5 ; make install
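To confirm the rbd engine actually got compiled in (./configure silently disables it when the librbd headers are missing), you can ask fio to list the engine's options -- on a reasonably recent fio this prints them, and errors out if the engine is absent:

                fio --enghelp=rbd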

 

Set the number of jobs (== clients) inside the fio config to

                numjobs=8

to simulate multiple clients.
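Note that each job is a separate process opening the image independently, so you will get eight result blocks in the output; adding group_reporting (a standard fio option) collapses them into one aggregate summary:

                [global]
                numjobs=8          # 8 independent processes == 8 simulated clients
                group_reporting    # one aggregated result instead of 8 per-job blocks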

 

 

regards

Danny

 

 

my test.fio:

[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
ioengine=rbd        # go through librbd; no kernel rbd mapping needed
clientname=admin    # cephx user (client.admin)
pool=rbd            # pool that holds the test image
rbdname=myimage     # the image must already exist
invalidate=0        # mandatory for the rbd engine
rw=randwrite        # global default; overridden per job section below
bs=1m               # 1 MiB blocks -- a bandwidth test rather than an IOPS test
runtime=120
iodepth=8           # outstanding I/Os per job
numjobs=8           # 8 client processes per job section

time_based          # run for the full runtime, even if the image has been written end to end
#direct=0

[seq-write]
stonewall           # wait for all previous jobs to finish before starting
rw=write            # sequential write; overrides the global randwrite

#[seq-read]
#stonewall
#rw=read
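To run it, assuming the cluster is reachable from this host (a sketch; the image name matches the config above, the size is arbitrary):

        # create the target image once (size in MB), then run the job file
        rbd create myimage --size 10240
        fio test.fio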

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
