Re: Performance doesn't scale well on a full ssd cluster.

Mark, please read this: https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg12486.html

On 16 Oct 2014, at 19:19, Mark Wu <wudx05@xxxxxxxxx> wrote:

> 
> Thanks for the detailed information, but I am already using fio with the rbd engine. With about 4 volumes I can reach the peak.
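> 
> (For illustration, a job file driving four volumes in parallel could look roughly like the sketch below; the image names vol1..vol4 and the bs/iodepth values are placeholders rather than the settings actually used in this test.)
> 
> [global]
> ioengine=rbd
> clientname=admin
> pool=rbd
> rw=randwrite
> bs=4k
> iodepth=32
> runtime=120
> time_based
> 
> # one job section per volume; without stonewall they all run concurrently
> [vol1]
> rbdname=vol1
> [vol2]
> rbdname=vol2
> [vol3]
> rbdname=vol3
> [vol4]
> rbdname=vol4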
> 
> On 17 Oct 2014, at 00:55, "Daniel Schwager" <Daniel.Schwager@xxxxxxxx> wrote:
> Hi Mark,
> 
> maybe you could check out the rbd-enabled fio:
> 
>                 http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
> 
>                 yum install ceph-devel
>                 git clone git://git.kernel.dk/fio.git
>                 cd fio ; ./configure ; make -j5 ; make install
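> 
> To double-check that the freshly built binary really includes the rbd engine (a quick sanity check, assuming fio was installed onto the PATH):
> 
>                 fio --version
>                 fio --enghelp | grep rbd    # lists "rbd" when librbd support was compiled in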
> 
> Set the number of jobs (== clients) inside the fio config to
> 
>                 numjobs=8
> 
> to simulate multiple clients.
> 
> regards
> 
> Danny
> 
> my test.fio:
> 
> [global]
> #logging
> #write_iops_log=write_iops_log
> #write_bw_log=write_bw_log
> #write_lat_log=write_lat_log
> ioengine=rbd
> clientname=admin
> pool=rbd
> rbdname=myimage
> invalidate=0    # mandatory
> rw=randwrite
> bs=1m
> runtime=120
> iodepth=8
> numjobs=8
> time_based
> #direct=0
> 
> [seq-write]
> stonewall
> rw=write
> 
> #[seq-read]
> #stonewall
> #rw=read
> 
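> To run the job file (assuming it is saved as test.fio and the client can reach the cluster through the usual /etc/ceph/ceph.conf and admin keyring):
> 
>                 fio test.fio
> 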


Cheers.
–––– 
Sébastien Han 
Cloud Architect 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien.han@xxxxxxxxxxxx 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
