Hi Atin,
This is an embedded system, and these dates are from before the system gets its time synchronized.
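(For reference, before time sync the clock sits at or near the Unix epoch, so files created then carry epoch-era mtimes, which `ls` renders as Jan 1 1970. A quick check, assuming GNU `date` rather than a busybox variant:)

```shell
# Epoch second 0 formats as the same "Jan 1 1970" stamp seen on the
# peer file later in this thread. (-d @SECONDS is a GNU date feature;
# busybox date on an embedded target may not accept it.)
date -u -d @0 +'%b %e %Y %H:%M'
```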
I have some questions:
1. Based on the logs, can we find out the reason for having two peer files with the same contents?
2. Is there any way to do this from the gluster code?
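(In the meantime, a rough way to spot the condition from the shell rather than from the gluster code: glusterd peer files carry a `uuid=` line, so duplicated contents can be flagged by comparing those lines. This is only a sketch; the default path below is the /system/glusterd location mentioned later in this thread and is an assumption for your setup.)

```shell
#!/bin/sh
# Sketch: flag peer files whose uuid= lines are identical.
# PEERS_DIR default is an assumption; pass your actual peers dir as $1.
PEERS_DIR="${1:-/system/glusterd/peers}"

# Pull the uuid= line out of every peer file, then print any value
# that occurs more than once -- a non-empty result means duplicates.
grep -h '^uuid=' "$PEERS_DIR"/* 2>/dev/null | sort | uniq -d
```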
Regards,
Abhishek
On Mon, Nov 21, 2016 at 9:52 AM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
atin@dhcp35-96:~/Downloads/gluster_users/abhishek_dup_uuid/duplicate_uuid/glusterd_2500/peers$ ls -lrt
total 8
-rw-------. 1 atin wheel 71 Jan 1 1970 5be8603b-18d0-4333-8590-38f918a22857
-rw-------. 1 atin wheel 71 Nov 18 03:31 26ae19a6-b58f-446a-b079-411d4ee57450

In board 2500, look at the date of the file 5be8603b-18d0-4333-8590-38f918a22857 (marked in bold). I am not sure how you ended up with this file at such a time stamp; my guess is that the set up was not cleaned properly at the time of re-installation. Here are the steps I'd recommend for now:
1. Rename 26ae19a6-b58f-446a-b079-411d4ee57450 to 5be8603b-18d0-4333-8590-38f918a22857; you should have only one entry in the peers folder in board 2500.
2. Bring down both glusterd instances.
3. Bring them back one by one.
And then restart glusterd to see if the issue persists.

On Mon, Nov 21, 2016 at 9:34 AM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Hope you will see in the logs......

On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Hi Atin,
It is not getting wiped off; we have changed the configuration path from /var/lib/glusterd to /system/glusterd, so the contents will remain the same as before.

On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
Abhishek, rebooting the board does wipe off the /var/lib/glusterd contents in your set up, right (as per my earlier conversation with you)? In that case, how are you ensuring that the same node gets back the older UUID? If you don't, then this is bound to happen.

On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Hi Team,
Please look into this problem, as it is seen very widely in our system. We have a replicate volume setup with two bricks, but after restarting the second board I get a duplicate entry in the "gluster peer status" output, as below. I am attaching all logs from both boards along with the command outputs. Note that we are not replacing any board in the setup, just rebooting it. Could you please check the reason for getting into this situation, as it happens frequently in multiple cases?
# gluster peer status
Number of Peers: 2
Hostname: 10.32.0.48
Uuid: 5be8603b-18d0-4333-8590-38f918a22857
State: Peer in Cluster (Connected)
Hostname: 10.32.0.48
Uuid: 5be8603b-18d0-4333-8590-38f918a22857
State: Peer in Cluster (Connected)
#
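(Atin's rename step could be sketched roughly as below. The peers path is an assumption based on the /system/glusterd location mentioned in this thread, and the UUIDs are the ones from the `ls -lrt` output above; glusterd should be down on both boards before touching the directory.)

```shell
#!/bin/sh
# Sketch of the rename step: collapse the duplicate peer entry on
# board 2500. Paths and UUIDs assumed from the thread; adjust to suit.
PEERS_DIR="/system/glusterd/peers"
SRC="26ae19a6-b58f-446a-b079-411d4ee57450"   # file with the sane mtime
DST="5be8603b-18d0-4333-8590-38f918a22857"   # expected peer UUID

# mv overwrites DST if it already exists, leaving a single entry.
mv "$PEERS_DIR/$SRC" "$PEERS_DIR/$DST"

# Sanity check: exactly one peer file should remain.
[ "$(ls "$PEERS_DIR" | wc -l)" -eq 1 ] || echo "unexpected peer count" >&2
```

After the rename, bring glusterd down on both boards and start them back one at a time, then recheck `gluster peer status`.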
--
Regards
Abhishek Paliwal
_________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
--
~ Atin (atinm)