Thanks Joe for the confirmation, cheers, have a good weekend.
On Thu, Aug 11, 2016 at 5:17 PM, Joe Julian <joe@xxxxxxxxxxxxxxxx> wrote:
Because "start from scratch" means changing hostnames. When you've got a strategy for naming hosts, assigning a hostname that doesn't fit that strategy breaks consistency. When you're managing hosts in 6 datacenters on 4 contenents, consistent naming is critical.On 08/11/2016 02:05 PM, Gandalf Corvotempesta wrote:
On 11 Aug 2016 7:21 PM, "Dan Lavu" <dan@xxxxxxxxxx> wrote:
>
> Is it possible? Looking at everything, it just seems like I need the content of the bricks and whatever is in /etc/glusterd and /var/lib/glusterd, while maintaining the same hostname, IP, and Gluster version?
>
Why not start from scratch and let gluster heal when you add the upgraded node back to the cluster?
Having consistent names makes automated deployment (or redeployment) much easier to code when you're using mgmt, SaltStack, Ansible, Puppet, or even Chef. This is also the same reason I use consistent naming for brick directories.
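For example (the hostnames, datacenter label, volume name, and brick path below are all hypothetical, just to illustrate the idea): when every host and brick follows the same pattern, a deployment script can derive the paths instead of looking them up per host:

    # hypothetical scheme: glusterNN.<dc>.example.com, bricks at /bricks/<volname>/brick
    VOLNAME=myvol
    for n in 01 02 03; do
        ssh "gluster${n}.dc1.example.com" mkdir -p "/bricks/${VOLNAME}/brick"
    done
    gluster volume create "$VOLNAME" replica 3 \
        gluster01.dc1.example.com:/bricks/${VOLNAME}/brick \
        gluster02.dc1.example.com:/bricks/${VOLNAME}/brick \
        gluster03.dc1.example.com:/bricks/${VOLNAME}/brick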
Gluster's tooling has never treated replacing a server with the same hostname and brick path as a possibility, which means replace-brick doesn't work for this case. Without being able to use replace-brick, self-heal doesn't know to heal that one single brick, so a full heal ("gluster volume heal <volname> full") is needed. Replace-brick is being changed to allow in-place replacement and solve that problem. Until then, Dan's process is perfectly reasonable and is the process that I use (more or less; I actually just template glusterd.info and rely on the sync process to fill in the rest of /var/lib/glusterd).
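For anyone finding this thread later, here's a rough sketch of that same-hostname replacement. The UUID, operating-version, and volume name are placeholders; pull the real values from a surviving peer's /var/lib/glusterd before trusting any of this:

    # on the rebuilt server: same hostname, same IP, same Gluster version
    systemctl stop glusterd

    # restore the old node's identity; the other peers know it by this UUID
    cat > /var/lib/glusterd/glusterd.info <<EOF
    UUID=<uuid-recorded-from-the-old-node>
    operating-version=<same-value-as-on-the-other-peers>
    EOF

    systemctl start glusterd

    # if the peers don't reconnect and sync on their own, run this on a
    # healthy peer; glusterd then pushes the rest of /var/lib/glusterd
    # (peer and volume definitions) to the rebuilt node
    gluster peer probe <rebuilt-node>

    # replace-brick can't target the stale brick, so force a full heal
    gluster volume heal <volname> full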
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users