Re: Slow rbd read performance

Hello,

On Mon, 23 Dec 2019 22:14:15 +0100 Ml Ml wrote:

> Hohoho, Merry Christmas and hello,
> 
> I set up a "poor man's" Ceph cluster with 3 nodes, one switch and
> normal standard HDDs.
> 
> My problem: with the rbd benchmark I get 190MB/sec write, but only
> 45MB/sec read speed.
>
Something is severely off with your testing or cluster if reads are slower
than writes, especially by this margin.
 
> Here is the Setup: https://i.ibb.co/QdYkBYG/ceph.jpg
> 
> I plan to add a separate switch to split the public network from the
> cluster network, but I think that is not my current problem here.
> 
You don't mention how many HDDs per server. 10Gb/s is most likely fine,
and a separate network (whether physical or logical) is usually neither
needed nor beneficial.
Your results indicate that the HIGHEST peak used about 70% of your
bandwidth and that your disks can only sustain about 20% of it.
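For reference (rough numbers, assuming about 1.1-1.25 GB/s of usable
throughput on a 10GbE link):

  rados bench max bandwidth   764 MB/s  -> roughly 65-70% of the link
  rados bench avg bandwidth   163 MB/s  -> roughly 15-20% of the link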

Run your tests consistently with the same tool.
Neither rados bench nor rbd bench is ideal, but at least they give
ballpark figures.
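For instance, since you already ran the write test with --no-cleanup,
something like the matching read tests against the same pool would be:

  rados bench -p scbench 30 seq    # sequential reads of the objects left behind
  rados bench -p scbench 30 rand   # random reads
  rados -p scbench cleanup         # remove the benchmark objects afterwards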
FIO on the actual mount on your backup server would be best.
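A minimal sketch, assuming the image is mounted at /mnt/backup on the
backup server (mount point and file name are just placeholders):

  # sequential read, large blocks, direct I/O to bypass the page cache
  fio --name=seqread --filename=/mnt/backup/fio-test --rw=read \
      --bs=4M --size=10G --direct=1 --ioengine=libaio --iodepth=16 \
      --group_reporting

Running the same job with --rw=write gives a directly comparable write
figure.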

Testing on a Ceph node is also prone to skewed results; test from the
actual client, your backup server.

Make sure your network does what you expect, and monitor the Ceph nodes
with e.g. atop during the test runs to see where the obvious bottlenecks
are.
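For example (iperf3 is just one option for a raw network check; any
similar tool will do):

  iperf3 -s                 # on ceph01
  iperf3 -c ceph01 -t 30    # on the backup server, reports raw TCP throughput

  atop 2                    # on each ceph node while the benchmark runs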

Christian

> I mount the stuff with rbd from the backup server. It seems that I get
> good write, but slow read speed. More details at the end of the mail.
> 
> rados bench -p scbench 30 write --no-cleanup:
> ---------------------------------------------------------------------
> Total time run:         34.269336
> ...
> Bandwidth (MB/sec):     162.945
> Stddev Bandwidth:       198.818
> Max bandwidth (MB/sec): 764
> Min bandwidth (MB/sec): 0
> Average IOPS:           40
> Stddev IOPS:            49
> Max IOPS:               191
> Min IOPS:               0
> Average Latency(s):     0.387122
> Stddev Latency(s):      1.24094
> Max latency(s):         11.883
> Min latency(s):         0.0161869
> 
> 
> Here are the rbd benchmarks run on ceph01:
> ----------------------------------------------------------------------
> rbd -p rbdbench bench $RBD_IMAGE_NAME --io-type write --io-size 8192
> --io-threads 256 --io-total 10G --io-pattern seq
> ...
> elapsed:    56  ops:  1310720  ops/sec: 23295.63  bytes/sec:
> 190837820.82 (190MB/sec) => OKAY
> 
> 
> rbd -p rbdbench bench $RBD_IMAGE_NAME --io-type read --io-size 8192
> --io-threads 256 --io-total 10G --io-pattern seq
> ...
> elapsed:   237  ops:  1310720  ops/sec:  5517.19  bytes/sec:
> 45196784.26 (45MB/sec) => WHY JUST 45MB/sec?
> 
> Since I ran those rbd benchmarks on ceph01, I guess the problem is not
> related to my backup rbd mount at all?
> 
> Thanks,
> Mario


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Rakuten Mobile Inc.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



