Ceph is a distributed system; it scales by concurrent access to many nodes.
Generally a single client will access a single OSD at a time. In other words, the maximum possible single-thread read is the read speed of one drive, and the maximum possible write is roughly a single drive's write speed divided by (replication size - 1). With replication size 3, for example, a 150 MB/s drive tops out around 75 MB/s for a single writer. But when you have many VMs accessing the same cluster, the load is spread all over (just like when you see recovery running).

A single spinning disk should be able to do 100-150 MB/s depending on make and model, even with the overhead of Ceph and networking, so I still think 20 MB/s is a bit on the low side, depending on how you benchmark. I would start by going through this benchmarking guide and see if you find any issues: https://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance (a couple of example rados bench runs are sketched further down).

In order to get more single-thread performance out of Ceph you must get faster individual parts (NVMe disks, fast RAM and processors, fast network, etc.), or you can cheat by spreading the load over more disks: e.g. you can do RBD fancy striping, attach multiple disks with individual controllers in the VM, or use caching and/or readahead (rough examples of these are sketched below as well).

When it comes to cache tiering I would remove that; it does not get the love it needs, and Red Hat has even stopped supporting it in deployments. But you can use dm-cache or bcache on the OSDs and/or rbd cache on the KVM clients.
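As a quick sanity check while going through that guide, something along these lines is roughly what I would run. The pool name "testpool" is just an example, and -t 1 limits it to a single thread so you see the single-client numbers rather than the aggregate:

    # 60 second write test, single thread, keep the objects for the read test
    rados bench -p testpool 60 write -t 1 --no-cleanup
    # sequential read of the objects written above
    rados bench -p testpool 60 seq -t 1
    # remove the benchmark objects afterwards
    rados -p testpool cleanup

Run it once with -t 1 and once with the default 16 threads; the gap between the two numbers is basically the single-thread problem described above.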
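For the fancy striping, a sketch of what creating such an image could look like. The image name, size, stripe unit and count are only example values, adjust to your workload:

    # stripe data in 64K units round-robin over 8 objects,
    # so a sequential reader/writer hits more OSDs in parallel
    rbd create --size 102400 --stripe-unit 65536 --stripe-count 8 rbd/vm-disk-example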
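For rbd cache and readahead on the KVM clients, something like this in ceph.conf on the hypervisor is the general idea. The values are only examples and the defaults may already be fine for you:

    [client]
        rbd cache = true
        rbd cache size = 67108864              # 64MB per-image cache instead of the default 32MB
        rbd cache max dirty = 50331648         # must stay below the cache size
        rbd readahead max bytes = 4194304      # read ahead up to 4MB
        rbd readahead disable after bytes = 0  # keep readahead enabled instead of turning it off after 50MB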
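And for bcache on the OSDs, the rough idea is something like this. Device names are placeholders, test on a scratch node first, and note the OSD has to be created on top of the resulting bcache device:

    # pair a spinning backing disk with an SSD/NVMe caching partition
    make-bcache -B /dev/sdX -C /dev/nvme0n1pY
    # register both devices (normally done automatically by udev after a reboot)
    echo /dev/sdX       > /sys/fs/bcache/register
    echo /dev/nvme0n1pY > /sys/fs/bcache/register
    # the combined device shows up as /dev/bcacheN; use that when creating the OSD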
good luck
Ronny Aasen

On 09.09.2018 11:20, Alex Lupsa wrote: