Hohoho, Merry Christmas and hello,

I set up a "poor man's" Ceph cluster with 3 nodes, one switch and plain standard HDDs. My problem: with the rbd benchmark I get 190 MB/sec write, but only 45 MB/sec read speed.

Here is the setup: https://i.ibb.co/QdYkBYG/ceph.jpg

I plan to add a second switch to separate the public network from the cluster network (a sketch of the ceph.conf change I have in mind is in the P.S. below), but I don't think that is my current problem here.

I mount the image with rbd from the backup server (also sketched in the P.S.). It seems that I get good write, but slow read speed. More details at the end of the mail.

rados bench -p scbench 30 write --no-cleanup:
---------------------------------------------------------------------
Total time run:         34.269336
...
Bandwidth (MB/sec):     162.945
Stddev Bandwidth:       198.818
Max bandwidth (MB/sec): 764
Min bandwidth (MB/sec): 0
Average IOPS:           40
Stddev IOPS:            49
Max IOPS:               191
Min IOPS:               0
Average Latency(s):     0.387122
Stddev Latency(s):      1.24094
Max latency(s):         11.883
Min latency(s):         0.0161869

Here are the rbd benchmarks, run on ceph01:
----------------------------------------------------------------------
rbd -p rbdbench bench $RBD_IMAGE_NAME --io-type write --io-size 8192 \
    --io-threads 256 --io-total 10G --io-pattern seq
...
elapsed: 56   ops: 1310720   ops/sec: 23295.63   bytes/sec: 190837820.82
(190 MB/sec) => OKAY

rbd -p rbdbench bench $RBD_IMAGE_NAME --io-type read --io-size 8192 \
    --io-threads 256 --io-total 10G --io-pattern seq
...
elapsed: 237  ops: 1310720   ops/sec: 5517.19    bytes/sec: 45196784.26
(45 MB/sec) => WHY ONLY 45 MB/sec?

Since I ran those rbd benchmarks on ceph01 itself, I guess the problem is not related to my backup rbd mount at all?

Thanks,
Mario
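
P.S. Because the write bench above ran with --no-cleanup, its objects are still sitting in the pool, so a raw RADOS sequential read against them should show whether the slow reads already happen below librbd. This is what I would run next (30 seconds, default thread count, same scbench pool as above):

rados bench -p scbench 30 seq     # read back the objects left by the write bench
rados bench -p scbench 30 rand    # same objects, random read pattern
rados -p scbench cleanup          # remove the benchmark objects afterwards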
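
For the planned network split, this is roughly the ceph.conf change I have in mind once the second switch is in place; the two subnets are placeholders, not my real addresses:

[global]
    # clients and MONs keep talking on the existing LAN
    public_network  = 192.168.1.0/24
    # OSD replication and heartbeat traffic moves to the new switch
    cluster_network = 10.0.0.0/24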
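
And for completeness, this is essentially how the backup server maps the image via the kernel rbd client (the device name and mount point here are just examples):

rbd map rbdbench/$RBD_IMAGE_NAME   # shows up as e.g. /dev/rbd0
mount /dev/rbd0 /mnt/backup        # then mounted like any block device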