If you force CRUSH to put a copy in each rack, then you will be limited by the smallest rack. You can also run into some severe limitations if you try to keep your copies to two racks (see the thread titled "CRUSH rule for 3 replicas across 2 hosts" for some of my explanation of this).
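To put rough numbers on it (these node and disk counts are just an illustration, not your actual layout):

    rack1:  8 x 4T = 32T raw
    rack2:  9 x 6T = 54T raw
    rack3:  9 x 6T = 54T raw

    With one replica forced into each rack, every object needs a copy in rack1,
    so usable raw capacity tops out around 3 x 32T = 96T of the 140T total;
    the extra space in the 6T racks sits idle once rack1 fills up.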
If I were you, I would install almost all of the new hardware and hold back a few pieces. Get the new hardware up and running, then take down some of the original hardware and relocate it into the other cabinets so that you even out the older, lower-capacity nodes and the newer, higher-capacity nodes across each cabinet. That would give you the best of both redundancy and performance (not all PGs would have to have a replica on the potentially slower hardware), and it would let you run with replication level three while still being able to lose a rack.
Another option, if you have the racks, is to spread the new hardware over 3 racks instead of 2, so that your cluster spans 4 racks. CRUSH will give preference to the newer hardware (assuming the CRUSH weights reflect the size of the disks) and you would no longer be limited by the older, smaller rack.
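For reference, a rack-aware rule in the decompiled CRUSH map is basically the stock replicated rule with the failure domain changed from host to rack. A minimal sketch (the rule name and ruleset number are placeholders; adjust them to fit your map):

    rule replicated_rack {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default
            # pick N distinct racks, then one OSD (leaf) under each
            step chooseleaf firstn 0 type rack
            step emit
    }

With 4 racks and size=3, CRUSH then has a spare rack to fall back on, and the per-rack weights (the sum of the OSD weights underneath) determine how much data each rack takes. You would point the pool at the new rule with something like "ceph osd pool set <pool> crush_ruleset 1".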
On Thu, Apr 23, 2015 at 3:20 AM, Rogier Dikkes <rogier.dikkes@xxxxxxxxxxx> wrote:
Hello all,

At this moment we have a scenario I would like your opinion on.

Scenario:
Currently we have a Ceph environment with 1 rack of hardware; this rack contains a couple of OSD nodes with 4T disks. In a few months' time we will deploy 2 more racks with OSD nodes; these nodes have 6T disks and 1 more node per rack.

Short overview:
rack1: 4T OSD
rack2: 6T OSD
rack3: 6T OSD

At this moment we are playing around with the idea of using the CRUSH map to make Ceph 'rack aware' and ensure data is replicated between racks. However, from the documentation I gathered that when you enforce data replication between buckets, your maximum storage size will be that of the smallest bucket. My understanding: if we enforce the objects (size=3) to be replicated to 3 racks, then the moment the rack with 4T OSDs is full we cannot store data anymore. Is this assumption correct?

The current idea we are playing with:
- Create 2 rack buckets
- Create a ruleset that places 2 object replicas across the 2x 6T buckets
- Create a ruleset that places 1 object replica over all the hosts

This would result in 3 replicas of the object, where we are sure that at least 2 replicas are in different racks. In the unlikely event of a rack failure we would have at least 1 or 2 replicas left.

Our idea is to have a CRUSH rule with config that looks like:

Is this the right approach, and would it cause limitations in terms of performance or usability? Do you have suggestions?

Another interesting situation we have: we are going to move the hardware to new locations next year, so the rack layout will change and thus the CRUSH map will be altered. When changing the CRUSH map in a way that would turn the 2x 6T racks into 4 racks, would we need to take any special actions into consideration?

Thank you for your answers, they are much appreciated!

Rogier Dikkes
System Programmer Hadoop & HPC Cloud
SURFsara | Science Park 140 | 1098 XG Amsterdam
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com