Hello all,

We currently have a scenario I would like your opinion on.

Scenario: our Ceph environment consists of one rack of hardware; this rack contains a couple of OSD nodes with 4T disks. In a few weeks we will deploy two more racks of OSD nodes, and those nodes have 6T disks.

Short overview:

rack1: 4T OSDs
rack2: 6T OSDs
rack3: 6T OSDs

We are playing with the idea of using the CRUSH map to make Ceph rack aware and to make sure data is replicated between racks. However, from the documentation I gather that when you enforce replication between buckets, the usable capacity is limited by the smallest bucket.

My understanding: if we enforce the objects (size=3) to be replicated across all three racks, then the moment the rack with the 4T OSDs is full we cannot store any more data. Is this assumption correct?

The idea we are currently playing with:

- Create 3 rack buckets
- Create 1 room bucket containing these 3 racks
- Create a ruleset that places 2 object replicas in the two 6T rack buckets
- Create a ruleset that places 1 object replica anywhere in the room

This would result in 3 replicas of the object. Our idea is a CRUSH rule with a config along the lines of the sketch at the bottom of this mail.

Is this the right approach, and would it cause limitations in terms of performance or usability? Do you have suggestions?

Rogier Dikkes
System Programmer Hadoop & HPC Cloud
e-mail: rogier.dikkes@xxxxxxxxxxx
SURFsara | Science Park 140 | 1098 XG Amsterdam
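
P.S. A rough sketch of the rule we have in mind (the rule name, the ruleset number and the bucket names rack2/rack3/room1 are placeholders; we have not compiled or tested this yet):

rule room_split {
        ruleset 1
        type replicated
        min_size 3
        max_size 3
        # one replica on a host in the first 6T rack
        step take rack2
        step chooseleaf firstn 1 type host
        step emit
        # one replica on a host in the second 6T rack
        step take rack3
        step chooseleaf firstn 1 type host
        step emit
        # third replica on any rack in the room (this one may land in the 4T rack)
        step take room1
        step chooseleaf firstn 1 type rack
        step emit
}

As far as we understand, separate take/emit blocks do not exclude each other's choices, so the last replica could also end up in rack2 or rack3 next to an existing copy.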