fio librbd result is poor

Hi guys,

Recently I was testing our Ceph cluster, which is mainly used for block storage (RBD).

We have 30 SSD drives in total (5 storage nodes, 6 SSD drives per node). However, the fio results are very poor.

We tested the workload on the ssd pool with the following parameters:

"fio --size=50G \

       --ioengine=rbd \

       --direct=1 \

       --numjobs=1 \

       --rw=randwrite(randread) \

       --name=com_ssd_4k_randwrite(randread) \

       --bs=4k \

       --iodepth=32 \

       --pool=ssd_volumes \

       --runtime=60 \

       --ramp_time=30 \

--rbdname=4k_test_image"

and here are the results:

random write: 4631; random read: 21127


I also tested a pool (size=1, min_size=1, pg_num=256) consisting of only a single SSD drive with the same workload pattern, and the result was more acceptable (random write: 8303; random read: 27859).
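
As a back-of-the-envelope check (taking those numbers as IOPS, and assuming the ssd_volumes pool is replicated with size=3 and the filestore journals sit on the same SSDs):

    4631 client write IOPS x 3 replicas x 2 (journal double-write) ~ 27,800 device-level writes/s
    27,800 / 30 SSDs ~ 930 writes/s per SSD
    21,127 read IOPS / 30 SSDs ~ 700 reads/s per SSD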


We have tuned the Linux kernel (read_ahead, disk scheduler, NUMA, swappiness) and ceph.conf (client message caps, filestore queue, journal queue, rbd cache), and checked the RAID controller cache settings.
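
For reference, the kernel-side tuning was along these lines (sdX and the values shown here are placeholders, not our exact settings):

    # per SSD device
    echo noop > /sys/block/sdX/queue/scheduler
    echo 128 > /sys/block/sdX/queue/read_ahead_kb
    # system-wide
    sysctl -w vm.swappiness=10
    # OSD processes bound to the NUMA node local to their HBA/NIC,
    # e.g. started under numactl --cpunodebind=0 --membind=0

and the ceph.conf options we touched fall into these families (values again are placeholders):

    [osd]
    osd_client_message_cap = 1000
    osd_client_message_size_cap = 1073741824
    filestore_queue_max_ops = 5000
    filestore_queue_max_bytes = 1073741824
    journal_queue_max_ops = 5000
    journal_queue_max_bytes = 1073741824

    [client]
    rbd_cache = true
    rbd_cache_writethrough_until_flush = true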


The only deficiency in the architecture is the unbalanced weight between the three racks: one rack has only a single storage node.


So can anybody tell us whether these numbers are reasonable? If not, any suggestions to improve them would be appreciated.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
