Hi,
We have a fairly large Ceph cluster (3.2 PB) that we want to expand, and we would like to get your input on this.
The current cluster has around 700 OSDs (a mix of 4 TB and 6 TB drives) in three racks; the largest pool is rgw and uses replica 3.
For non-technical reasons (budgetary, etc.) we are considering adding three more racks, but initially putting only two storage nodes (36 x 8 TB drives each) in each new rack. This will leave the rack weights imbalanced: three racks with a weight of around 1000 and 288 OSDs, and three racks with a weight of around 500 but only 72 OSDs.
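As a quick sanity check on those numbers, here is a back-of-the-envelope sketch for one new rack (assuming the usual convention that CRUSH weight is roughly 1.0 per TiB of capacity; the real weights will of course come from the OSDs themselves):

```python
# Rough capacity check for one new rack: 2 nodes x 36 x 8 TB drives.
# CRUSH weight is conventionally ~1.0 per TiB, which is why 576 TB of
# raw capacity shows up as a rack weight of "around 500".
drives = 2 * 36                          # 72 OSDs per new rack
raw_tb = drives * 8                      # 576 TB raw
crush_weight = raw_tb * 10**12 / 2**40   # ~524 in TiB terms
print(drives, raw_tb, round(crush_weight))   # 72 576 524
```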
Our one-replica-per-rack CRUSH rule will cause the existing data to be rebalanced across all six racks, with the OSDs in the new racks getting only a proportionate share of the replicas.
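To get a rough feel for what a "proportionate share" looks like with these weights, here is a minimal sketch. It is just a weighted draw without replacement, not actual CRUSH straw2 placement, so treat the numbers as indicative only; the rack names and PG count are made up for illustration:

```python
import random
from collections import Counter

# Approximate rack weights from above: three full racks and three new, lighter ones.
racks = {"rack1": 1000, "rack2": 1000, "rack3": 1000,
         "rack4": 500, "rack5": 500, "rack6": 500}

def place_pg(weights, replicas=3):
    """Pick `replicas` distinct racks, weighted by their remaining weight."""
    remaining = dict(weights)
    chosen = []
    for _ in range(replicas):
        names, w = zip(*remaining.items())
        pick = random.choices(names, weights=w)[0]
        chosen.append(pick)
        del remaining[pick]      # one replica per rack, so drop it from the pool
    return chosen

pgs = 100_000
counts = Counter()
for _ in range(pgs):
    counts.update(place_pg(racks))

for rack, n in sorted(counts.items()):
    print(f"{rack}: {n / (pgs * 3):.1%} of replicas")
# Roughly 20-21% per heavy rack and 12-13% per light rack, i.e. close to
# (though not exactly) proportional to weight (22.2% / 11.1%).
```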
Do you see any possible problems with this approach? Should Ceph be able to properly rebalance the existing data among racks with imbalanced weights?
Thank you for your input and please let me know if you need additional info.