Hi Atin,
It is not getting wiped off; we have changed the configuration path from /var/lib/glusterd to /system/glusterd.
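For reference, the relocation is done through glusterd's working-directory option in its own volfile; a minimal sketch of what that looks like in our setup (abbreviated here for illustration, the real file carries additional options):

# /etc/glusterfs/glusterd.vol (abbreviated) -- glusterd picks up its
# working directory from here instead of the default /var/lib/glusterd
volume management
    type mgmt/glusterd
    option working-directory /system/glusterd
end-volume
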
On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:

Abhishek, rebooting the board does wipe off the /var/lib/glusterd contents in your setup, right (as per my earlier conversation with you)? In that case, how are you ensuring that the same node gets back the older UUID? If you don't, then this is bound to happen.

On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:

Hi Team,
Please look into this problem, as it is very widely seen in our system.
We have a replicate volume setup with two bricks, but after restarting the second board I am getting a duplicate entry in the "gluster peer status" output, as shown below. We are not replacing any board in the setup, just rebooting it, so could you please check what causes this situation? It is very frequent and happens in multiple cases. I am attaching all logs from both boards and the command outputs as well.
# gluster peer status
Number of Peers: 2
Hostname: 10.32.0.48
Uuid: 5be8603b-18d0-4333-8590-38f918a22857
State: Peer in Cluster (Connected)
Hostname: 10.32.0.48
Uuid: 5be8603b-18d0-4333-8590-38f918a22857
State: Peer in Cluster (Connected)
#
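
For completeness, this is roughly how we check the persisted identity on each board after a reboot (the paths assume our relocated /system/glusterd working directory; output omitted here):

# UUID this board believes it owns (should survive the reboot)
grep UUID /system/glusterd/glusterd.info

# One file per known peer; a stale or regenerated entry here can
# show up as a duplicate in "gluster peer status"
ls -l /system/glusterd/peers/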
--
Regards
Abhishek Paliwal
--
~ Atin (atinm)
--
Regards
Abhishek Paliwal
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel