kRBD write performance for high IO use cases

I have a fairly large cluster running Ceph BlueStore with extremely fast SAS SSDs for the metadata.  In fio benchmarks I am getting 200k-300k random write IOPS, but under sustained Elasticsearch workloads my clients seem to hit a wall of around 1100 IO/s per RBD device.  I've tried both 1 and 4 RBD devices and still only get about 1100 IO/s per device, so 4 devices gets me around 4k total.
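
For reference, the fio runs were roughly along these lines (the device path and job parameters here are just illustrative, not my exact job file):

  fio --name=randwrite --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=8 --runtime=60 --group_reporting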

Is there some sort of setting that limits each RBD device's performance?  I've tried playing with nr_requests, but that doesn't seem to change anything.  I'm just looking for another 20-30% random write IO performance.  I've even considered doing RAID 0 across 4-8 RBD devices just to get the IO performance up.
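
In case it clarifies what I mean, this is roughly what I have been poking at, and what I had in mind for the striping idea (device names are just examples):

  # check / raise the block-layer queue depth for a mapped RBD device
  cat /sys/block/rbd0/queue/nr_requests
  echo 1024 > /sys/block/rbd0/queue/nr_requests

  # rough sketch of an md RAID 0 stripe across four mapped RBD images
  mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/rbd0 /dev/rbd1 /dev/rbd2 /dev/rbd3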

Thoughts?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
