Hello all,

We have a scenario I would like your opinion on.

Scenario: we currently have a Ceph environment with 1 rack of hardware; this rack contains a couple of OSD nodes with 4T disks. In a few months' time we will deploy 2 more racks with OSD nodes; these nodes have 6T disks and 1 node more per rack.

Short overview:
rack1: 4T OSDs
rack2: 6T OSDs
rack3: 6T OSDs

We are playing with the idea of using the CRUSH map to make Ceph 'rack aware' and ensure data is replicated between racks. However, from the documentation I gathered that when you enforce replication across buckets, your usable capacity is limited by the smallest bucket. My understanding: if we enforce that objects (size=3) are replicated to 3 racks, then the moment the rack with the 4T OSDs is full we can no longer store data. Is this assumption correct?

The idea we are currently playing with:
- Create 2 rack buckets.
- Create a ruleset that places 2 object replicas across the 2x 6T buckets.
- Create a ruleset that places 1 object replica over all the hosts.

This would result in 3 replicas of each object, with the guarantee that at least 2 of them are in different racks. In the unlikely event of a rack failure we would still have at least 1 or 2 replicas left. Our idea is to have a CRUSH rule with a config along the lines of the rule sketched below. Is this the right approach, and would it cause limitations in terms of performance or usability? Do you have suggestions?

Another interesting situation: we are going to move the hardware to new locations next year; the rack layout will change and thus the CRUSH map will be altered. When changing the CRUSH map in a way that would turn the 2x 6T racks into 4 racks, would we need to take any special actions into consideration?

Thank you for your answers, they are much appreciated!

Rogier Dikkes
System Programmer Hadoop & HPC Cloud
SURFsara | Science Park 140 | 1098 XG Amsterdam
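For context, here is a minimal sketch of what the rack-aware part of a decompiled CRUSH map could look like for the layout described above. The bucket ids, host names and weights are placeholders I made up for illustration, not values from the original mail:

    # hypothetical rack bucket; host names and weights are placeholders
    rack rack1 {
            id -11
            alg straw
            hash 0  # rjenkins1
            item osd-node-01 weight 14.560   # 4T OSD node
            item osd-node-02 weight 14.560
    }
    # rack2 and rack3 would be declared the same way with the 6T hosts

    root default {
            id -1
            alg straw
            hash 0  # rjenkins1
            item rack1 weight 29.120
            item rack2 weight 65.520
            item rack3 weight 65.520
    }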
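And a minimal sketch of a CRUSH rule that matches the "2 replicas in the 6T racks + 1 replica over all hosts" idea, assuming the rack names from the overview, the standard "default" root, and an arbitrary ruleset number; this is an untested illustration, not a vetted config:

    rule replicate_2plus1 {
            ruleset 1
            type replicated
            min_size 3
            max_size 3
            # first copy: one host in rack2 (6T)
            step take rack2
            step chooseleaf firstn 1 type host
            step emit
            # second copy: one host in rack3 (6T)
            step take rack3
            step chooseleaf firstn 1 type host
            step emit
            # remaining copy (pool size minus 2): any host under the root
            step take default
            step chooseleaf firstn -2 type host
            step emit
    }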