general ceph cluster design

Hi,
we are currently planning a new Ceph cluster that will be used for 
virtualization (providing RBD storage for KVM machines), and we have 
some general questions.

* Is it advisable to have one Ceph cluster spread over multiple 
datacenters (latency is low, as they are not far from each other)? Is 
anybody doing this in a production setup? We know that any network 
issue would affect virtual machines in all locations instead of just 
one, but we can see a lot of advantages as well.
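
To make this more concrete, here is a rough sketch of the CRUSH layout 
we have in mind. The bucket and host names (dc1..dc3, node01) are just 
placeholders, and the create-replicated command assumes Luminous or 
later (older releases have create-simple instead):

    # one datacenter bucket per site, hung under the default root
    ceph osd crush add-bucket dc1 datacenter
    ceph osd crush add-bucket dc2 datacenter
    ceph osd crush add-bucket dc3 datacenter
    ceph osd crush move dc1 root=default
    ceph osd crush move dc2 root=default
    ceph osd crush move dc3 root=default

    # move each host under its site
    ceph osd crush move node01 datacenter=dc1

    # rule that places each replica in a different datacenter;
    # this needs at least as many datacenters as the pool's size
    ceph osd crush rule create-replicated dc-spread default datacenter
    ceph osd pool set rbd crush_rule dc-spread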

* We are planning to combine the hosts for Ceph and KVM (so far we 
have been using separate hosts for virtual machines and Ceph storage). 
Besides the cost savings, we see the big advantage that the Ceph 
cluster grows automatically whenever compute nodes are added; in the 
past we got into situations where we had too many compute nodes and 
the Ceph cluster was not expanded accordingly, so performance dropped 
over time. On the other hand, every added compute node would change 
the CRUSH map, which might result in a lot of data movement in Ceph. 
Is anybody using combined servers for compute and Ceph storage and 
has some experience?
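
On the data movement point, our plan was to throttle backfill and 
bring new OSDs in gradually whenever a compute node is added, roughly 
like this (osd.42 and the weight are just examples):

    # keep the cluster from rebalancing while the new OSDs are created
    ceph osd set norebalance

    # limit how aggressively backfill competes with client I/O
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'

    # let the rebalance start once everything is in place
    ceph osd unset norebalance

    # alternative: create the OSDs with crush weight 0 and raise it in steps
    ceph osd crush reweight osd.42 0.5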

* Is there a maximum number of OSDs in a Ceph cluster? We are planning 
to use at least 8 OSDs per server in a cluster of about 100 servers, 
which would come to about 800 OSDs.
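
For sizing we went by the usual rule of thumb from the Ceph docs of 
roughly 100 PGs per OSD; with 800 OSDs and 3 replicas that would work 
out to something like this (pool name and pg_num just an example):

    # total PGs ~= (OSDs * 100) / replica count
    #   800 * 100 / 3 ~= 26667 -> next power of two = 32768
    ceph osd pool create rbd 32768 32768 replicated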

Thanks for any help...

Cheers
Nick
