What is the theoretical upper bandwidth of my Ceph cluster?

Hi All,

I have a Ceph cluster consisting of 3 OSDs, each on an SSD partition of a different server, with a maximum read/write speed of 500 MB/s per disk.
The three servers are connected through a switch that provides up to 10 Gbit/s of bandwidth between each pair of servers.
My Ceph version is Luminous 12.2.5. I have set up CephFS on BlueStore with an erasure-coded pool (k=2, m=1).

What is the theoretical upper bound on the read and write speed of my CephFS?
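My own back-of-envelope sketch is below (Python). I am assuming that an
EC(k=2, m=1) write amplifies client bytes by (k+m)/k = 1.5x on the disks,
and that a read only has to fetch the k data chunks; please correct me if
those assumptions are wrong, since I know the primary OSD's extra network
hop, metadata traffic, and journaling will push the real numbers lower.

    # Rough ceilings for 3 OSDs, EC(2,1), 500 MB/s SSDs, 10 Gbit/s links.
    GBIT = 1e9 / 8            # bytes per second in one gigabit

    n_osds  = 3
    disk_bw = 500e6           # 500 MB/s per SSD partition
    link_bw = 10 * GBIT       # 10 Gbit/s ~= 1.25 GB/s per server link
    k, m    = 2, 1
    write_amp = (k + m) / k   # 1.5x: k data chunks + m coding chunks
    read_amp  = 1.0           # assumption: only the k data chunks are read

    # Writes: the disks must absorb write_amp bytes per client byte,
    # and a single client is also capped by its own network link.
    write_bound = min(n_osds * disk_bw / write_amp, link_bw)  # 1.00 GB/s

    # Reads: all three disks can serve chunks in parallel, so the
    # client's 10 Gbit/s link becomes the bottleneck.
    read_bound = min(n_osds * disk_bw / read_amp, link_bw)    # 1.25 GB/s

    print(f"write ceiling: {write_bound / 1e9:.2f} GB/s")
    print(f"read  ceiling: {read_bound / 1e9:.2f} GB/s")

By this (very idealized) reasoning I get roughly 1.0 GB/s for writes and
1.25 GB/s for reads, before any Ceph or protocol overhead.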
Many thanks.

Regards,
Haiyang
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



