Self-healing process after node maintenance

Hi all,

I just want to make sure I understand exactly how the self-heal process works, because I need to take one of my nodes down for maintenance.
I have a replica 3 setup. Nothing complicated: 3 nodes, 1 volume, 1 brick per node (ZFS pool). All nodes run QEMU VMs, and the VM disks are stored on the Gluster volume.
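
(For context, a replica 3 volume like this one could be created and started roughly as in the sketch below; the volume name gv0, the host names node1-node3 and the brick path /zpool/gvbrick are hypothetical placeholders, not details taken from this setup.)

import subprocess

# Hypothetical names: volume "gv0", one brick per node on its ZFS pool.
BRICKS = [f"node{i}:/zpool/gvbrick" for i in (1, 2, 3)]

# Standard CLI flow: create the replica 3 volume, then start it.
subprocess.run(
    ["gluster", "volume", "create", "gv0", "replica", "3", *BRICKS],
    check=True,
)
subprocess.run(["gluster", "volume", "start", "gv0"], check=True)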

I want to take node1 offline for maintenance. If I migrate all VMs to node2 and node3 and then shut down node1, I assume everything will keep running without downtime, since 2 of the 3 nodes will still be online.
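
(With replica 3 and default quorum settings, the volume should stay writable with 2 of the 3 bricks up. Below is a minimal sketch of checking that the remaining bricks are online before powering node1 off, assuming a hypothetical volume name gv0 and the standard gluster CLI on the path.)

import subprocess

VOLUME = "gv0"  # hypothetical volume name

def brick_status(volume: str) -> str:
    # "gluster volume status <vol>" lists every brick of the volume and
    # whether its brick process is currently online.
    return subprocess.run(
        ["gluster", "volume", "status", volume],
        capture_output=True, text=True, check=True,
    ).stdout

print(brick_status(VOLUME))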

My question: when I start node1 up again after maintenance and it comes back online, this will trigger the self-heal process on the disk files of all VMs. Will that healing happen only on node1?
Can node2 and node3 keep running the VMs without problems while node1 is healing these files? I want to be sure these files (the VM disks) will not get “locked” on node2 and node3 while self-heal is in progress on node1.
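
(A minimal sketch of watching heal progress after node1 rejoins, again assuming a hypothetical volume name gv0; it polls "gluster volume heal <vol> info" until no entries are pending.)

import subprocess
import time

VOLUME = "gv0"  # hypothetical volume name

def pending_heal_entries(volume: str) -> int:
    # "gluster volume heal <vol> info" prints, per brick, the entries
    # still waiting to be healed and ends each section with a
    # "Number of entries: N" line; sum those counts.
    out = subprocess.run(
        ["gluster", "volume", "heal", volume, "info"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        if line.startswith("Number of entries:"):
            total += int(line.split(":", 1)[1].strip())
    return total

while pending_heal_entries(VOLUME) > 0:
    print("heal still in progress...")
    time.sleep(60)
print("all entries healed")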

Thanks for clarification in advance.

BR!
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



