On Tue, Jul 12, 2016 at 1:43 PM, Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx> wrote:
> 2016-07-12 9:46 GMT+02:00 Anuradha Talur <atalur@xxxxxxxxxx>:
>> Yes, you can add a single node with 3 bricks. But, given that you are keeping the replica count
>> the same, these three bricks will be replicas of each other. That is not very useful in case of node
>> failures/shutdowns.
>
> So, the only way to grow a replica 3 cluster is to add 3 nodes at once?
> This is expensive. In this respect Ceph is cheaper, as I can add a single
> OSD node and Ceph automatically rebalances everything while still keeping
> redundancy.
Alternatively, you can replace 4 selected bricks on the first 3 nodes with 4 of the disks on the new machine. Now you have 4 freed bricks that can be reused. Form 2 extra replica sets with 3 bricks each and you are done.
Example:
You have S1, S2, S3 and you added S4
Let's say the bricks in the replica sets are:
(s1b1, s2b1, s3b1)
(s1b2, s2b2, s3b2)
(s1b3, s2b3, s3b3)
(s1b4, s2b4, s3b4)
(s1b5, s2b5, s3b5)
(s1b6, s2b6, s3b6)
Let's say the new bricks are: s4b1, s4b2... s4b6
Now you do a replace-brick of:
s1b1 -> s4b1
s2b2 -> s4b2
s3b3 -> s4b3
s1b4 -> s4b4
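In CLI terms this would be something like the following (just a sketch: it assumes a volume named vol0 and bricks mounted at /bricks/bN on each server, so adjust names and paths to your layout):

    # move one brick from each of the first four replica sets onto S4
    gluster volume replace-brick vol0 s1:/bricks/b1 s4:/bricks/b1 commit force
    gluster volume replace-brick vol0 s2:/bricks/b2 s4:/bricks/b2 commit force
    gluster volume replace-brick vol0 s3:/bricks/b3 s4:/bricks/b3 commit force
    gluster volume replace-brick vol0 s1:/bricks/b4 s4:/bricks/b4 commit force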
Now erase the old bricks that we replaced, i.e. s1b1, s2b2, s3b3, s1b4.
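Something like this on the respective server for each freed brick (again assuming the /bricks/bN mount points; the point is to wipe the old data and the .glusterfs directory and clear the gluster xattrs so the brick can be reused in a new replica set):

    # e.g. on s1, for the freed brick b1 (assumed path)
    rm -rf /bricks/b1/.glusterfs /bricks/b1/*
    setfattr -x trusted.glusterfs.volume-id /bricks/b1
    setfattr -x trusted.gfid /bricks/b1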
Then do an add-brick of:
(s1b1, s2b2, s4b5)
(s1b4, s3b3, s4b6)
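Again only a sketch with the assumed vol0 name and /bricks/bN paths; note that the bricks are listed so that each consecutive group of 3 forms one new replica set:

    gluster volume add-brick vol0 replica 3 \
        s1:/bricks/b1 s2:/bricks/b2 s4:/bricks/b5 \
        s1:/bricks/b4 s3:/bricks/b3 s4:/bricks/b6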
Then do a rebalance.
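With the same assumed volume name, that is just:

    gluster volume rebalance vol0 start
    # and you can watch progress with:
    gluster volume rebalance vol0 status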
I haven't thought much about how to optimize data movement yet; I am just offering an alternative to the traditional way of adding new bricks. I am not sure if this is incorporated in the heketi project yet, which lets users care only about nodes and not about the brick level. I guess with heketi, all you need to say is "here, take this new machine S4" and it will (should) take care of all of this for the user.
--
Pranith
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users