On Tue, Mar 15, 2016 at 11:10 AM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
On 03/15/2016 10:54 AM, ABHISHEK PALIWAL wrote:
> Hi Atin,
>
> Are these files OK, or do you need some other files?
I just started going through the log files you shared. I have a few
questions for you after looking at the logs:
1. Are you sure the log you have provided from board B is from after a
reboot? If you claim that a reboot wipes off /var/lib/glusterd/, then why
am I seeing that glusterd has restored values from the disk files?
Yes, these logs are from Board B after the reboot. Could you please point me to the line number where you see that glusterd has restored values from the disk files?
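
In case it helps me locate that, here is a minimal sketch of what I can run on Board B, assuming the glusterd log matches etc-*-glusterd*.log under /var/log/glusterfs (the keywords are only my guess at the restore-related messages):

# look for restore/retrieve messages around glusterd startup on Board B
grep -inE 'restor|retriev|op-version' /var/log/glusterfs/etc-*-glusterd*.log | head -n 50
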
2. From the content of the glusterd configurations which you shared earlier,
the peer UUIDs are 4bf982c0-b21b-415c-b870-e72f36c7f2e7 from
002500/glusterd/peers & c6b64e36-76da-4e98-a616-48e0e52c7006 from
000300/glusterd/peers. They don't even exist in glusterd.log.
Somehow I have a feeling that the sequence of log and configuration
files you shared don't match!
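
To cross-check that mismatch on my side, a small sketch (same assumed log name pattern as above) that counts how often those peer UUIDs appear in each board's glusterd log:

# run on each board; a count of 0 means the UUID never appears in the log
grep -c -e 4bf982c0-b21b-415c-b870-e72f36c7f2e7 -e c6b64e36-76da-4e98-a616-48e0e52c7006 /var/log/glusterfs/etc-*-glusterd*.log
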
There are two UUID files present in 002500/glusterd/peers:
1. 4bf982c0-b21b-415c-b870-e72f36c7f2e7
Content of this file is:
uuid=4bf982c0-b21b-415c-b870-e72f36c7f2e7
state=10
hostname1=10.32.0.48
I have a question: where is this UUID coming from?
2. 98a28041-f853-48ac-bee0-34c592eeb827
Content of this file is:
uuid=f4ebe3c5-b6a4-4795-98e0-732337f76faf // This UUID belongs to the 000300 (10.32.0.48) board; you can check this in both glusterd log files
state=4 // What does this state field indicate in this file?
hostname1=10.32.0.48
There is only one UUID file present in 00030/glusterd/peers:
c6b64e36-76da-4e98-a616-48e0e52c7006 // This is the old UUID of the 002500 board before the reboot
Content of this file is:
uuid=267a92c3-fd28-4811-903c-c1d54854bda9 // This is the new UUID generated by the 002500 board after the reboot; you can check this as well in the glusterd log file of the 00030 board.
state=3
hostname1=10.32.1.144
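
For reference, this is roughly how I dump the state on each board before comparing (a minimal sketch, assuming the default /var/lib/glusterd working directory):

# local daemon UUID, persisted across restarts unless the directory is wiped
cat /var/lib/glusterd/glusterd.info

# one file per known peer; the file name is the peer's UUID and the contents
# hold the uuid, state and hostname1 fields quoted above
for f in /var/lib/glusterd/peers/*; do
    echo "== $f =="
    cat "$f"
done
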
~Atin
>
> Regards,
> Abhishek
>
> On Mon, Mar 14, 2016 at 6:12 PM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
>
> You mean etc*-glusterd-*.log file from both of the boards?
>
> If yes, please find the attachment for the same.
>
> On Mon, Mar 14, 2016 at 5:27 PM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
>
>
>
> On 03/14/2016 05:09 PM, ABHISHEK PALIWAL wrote:
> > I am not getting which glusterd directory you are asking about. If you are
> > asking about the /var/lib/glusterd directory, then it is the same one I
> > shared earlier.
> 1. Go to the /var/log/glusterfs directory
> 2. Look for the glusterd log file
> 3. Attach the log
> Do it for both boards.
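
For my own notes, a minimal sketch of how I collect these from each board (assuming the log name matches the etc-*-glusterd*.log pattern mentioned elsewhere in this thread; the /tmp path is just an example):

# list the glusterd log and pack it up for attaching
ls -l /var/log/glusterfs/etc-*-glusterd*.log
tar czf /tmp/glusterd-logs-$(hostname).tar.gz /var/log/glusterfs/etc-*-glusterd*.log
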
> >
> > I have two directories related to gluster
> >
> > 1. /var/log/glusterfs
> > 2./var/lib/glusterd
> >
> > On Mon, Mar 14, 2016 at 4:12 PM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
> >
> >
> >
> > On 03/14/2016 03:59 PM, ABHISHEK PALIWAL wrote:
> > > I have only these glusterd files available on the nodes
> > Look for etc-*-glusterd*.log in /var/log/glusterfs; that represents the
> > glusterd log file.
> > >
> > > Regards,
> > > Abhishek
> > >
> > > On Mon, Mar 14, 2016 at 3:43 PM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
> > >
> > >
> > >
> > > On 03/14/2016 02:18 PM, ABHISHEK PALIWAL wrote:
> > > >
> > > >
> > > > On Mon, Mar 14, 2016 at 12:12 PM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
> > > >
> > > >
> > > >
> > > > On 03/14/2016 10:52 AM, ABHISHEK PALIWAL wrote:
> > > > > Hi Team,
> > > > >
> > > > > I am facing some issue with peer status, and because of that
> > > > > remove-brick on a replica volume is failing.
> > > > >
> > > > > Here is the scenario of what I am doing with gluster:
> > > > >
> > > > > 1. I have two boards, A & B, and gluster is running on both of the boards.
> > > > > 2. On the boards I have created a replicated volume with one brick on each board.
> > > > > 3. Created one glusterfs mount point where both of the bricks are mounted.
> > > > > 4. Started the volume with nfs.disable=true.
> > > > > 5. Till now everything is in sync between both of the bricks.
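
For completeness, this is roughly the command sequence behind steps 1-5 above (a sketch only; the mount point path is just an example):

# from board A (10.32.0.48): probe board B and create the 1x2 replica volume
gluster peer probe 10.32.1.144
gluster volume create c_glusterfs replica 2 10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick
gluster volume set c_glusterfs nfs.disable on
gluster volume set c_glusterfs network.ping-timeout 4
gluster volume start c_glusterfs

# mount the volume on a glusterfs mount point (example path)
mount -t glusterfs 10.32.0.48:/c_glusterfs /mnt/c_glusterfs
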
> > > > >
> > > > > Now I manually plug out board B from the slot and plug it in again.
> > > > >
> > > > > 1. After board B boots up, I start glusterd on board B.
> > > > >
> > > > > Following are some gluster command outputs on Board B after step 1.
> > > > >
> > > > > # gluster peer status
> > > > > Number of Peers: 2
> > > > >
> > > > > Hostname: 10.32.0.48
> > > > > Uuid: f4ebe3c5-b6a4-4795-98e0-732337f76faf
> > > > > State: Accepted peer request (Connected)
> > > > >
> > > > > Hostname: 10.32.0.48
> > > > > Uuid: 4bf982c0-b21b-415c-b870-e72f36c7f2e7
> > > > > State: Peer is connected and Accepted (Connected)
> > > > >
> > > > > Why is this peer status showing two peers with different UUIDs?
> > > > GlusterD doesn't generate a new UUID on init if it has already
> > > > generated a UUID earlier. This clearly indicates that on reboot of
> > > > board B the content of /var/lib/glusterd was wiped off. I've asked you
> > > > this question multiple times: is that the case?
> > > >
> > > >
> > > > Yes, I am following the same procedure as mentioned in the link:
> > > >
> > > > http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
> > > >
> > > > But why is it showing two peer entries?
> > > >
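
For the record, this is what I understand that page to suggest for the board whose peer entry is rejected (a rough sketch; assuming the default /var/lib/glusterd path, and adjusting the service commands to however glusterd is managed on our boards):

# stop glusterd on the rejected board
/etc/init.d/glusterd stop
# remove everything under /var/lib/glusterd except glusterd.info, so the UUID is kept
find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
/etc/init.d/glusterd start
# probe the good board so the peer/volume configuration is fetched again
gluster peer probe 10.32.0.48
/etc/init.d/glusterd restart
gluster peer status
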
> > > > >
> > > > > # gluster volume info
> > > > >
> > > > > Volume Name: c_glusterfs
> > > > > Type: Replicate
> > > > > Volume ID: c11f1f13-64a0-4aca-98b5-91d609a4a18d
> > > > > Status: Started
> > > > > Number of Bricks: 1 x 2 = 2
> > > > > Transport-type: tcp
> > > > > Bricks:
> > > > > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> > > > > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
> > > > > Options Reconfigured:
> > > > > performance.readdir-ahead: on
> > > > > network.ping-timeout: 4
> > > > > nfs.disable: on
> > > > > # gluster volume heal c_glusterfs info
> > > > > c_glusterfs: Not able to fetch volfile from glusterd
> > > > > Volume heal failed.
> > > > > # gluster volume status c_glusterfs
> > > > > Status of volume: c_glusterfs
> > > > > Gluster process                            TCP Port  RDMA Port  Online  Pid
> > > > > ------------------------------------------------------------------------------
> > > > > Brick 10.32.1.144:/opt/lvmdir/c2/brick     N/A       N/A        N       N/A
> > > > > Self-heal Daemon on localhost              N/A       N/A        Y       3922
> > > > >
> > > > > Task Status of Volume c_glusterfs
> > > > > ------------------------------------------------------------------------------
> > > > > There are no active volume tasks
> > > > > --
> > > > >
> > > > > At the same time, Board A has the following gluster command output:
> > > > >
> > > > > # gluster peer status
> > > > > Number of Peers: 1
> > > > >
> > > > > Hostname: 10.32.1.144
> > > > > Uuid: c6b64e36-76da-4e98-a616-48e0e52c7006
> > > > > State: Peer in Cluster (Connected)
> > > > >
> > > > > Why is it showing the older UUID of host 10.32.1.144 when this UUID
> > > > > has changed and the new UUID is 267a92c3-fd28-4811-903c-c1d54854bda9?
> > > > >
> > > > >
> > > > > # gluster volume heal c_glusterfs info
> > > > > c_glusterfs: Not able to fetch volfile from glusterd
> > > > > Volume heal failed.
> > > > > # gluster volume status c_glusterfs
> > > > > Status of volume: c_glusterfs
> > > > > Gluster process                            TCP Port  RDMA Port  Online  Pid
> > > > > ------------------------------------------------------------------------------
> > > > > Brick 10.32.0.48:/opt/lvmdir/c2/brick      49169     0          Y       2427
> > > > > Brick 10.32.1.144:/opt/lvmdir/c2/brick     N/A       N/A        N       N/A
> > > > > Self-heal Daemon on localhost              N/A       N/A        Y       3388
> > > > > Self-heal Daemon on 10.32.1.144            N/A       N/A        Y       3922
> > > > >
> > > > > Task Status of Volume c_glusterfs
> > > > > ------------------------------------------------------------------------------
> > > > > There are no active volume tasks
> > > > >
> > > > > As you can see, "gluster volume status" shows that brick
> > > > > "10.32.1.144:/opt/lvmdir/c2/brick" is offline, so we have tried to
> > > > > remove it, but we are getting the error "volume remove-brick
> > > > > c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force :
> > > > > FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume
> > > > > c_glusterfs" on Board A.
> > > > >
> > > > > Please reply to this post, because I am always getting this error
> > > > > in this scenario.
> > > > >
> > > > > For more detail I am also attaching the logs from both of the
> > > > > boards, which include some manually created files in which you can
> > > > > find the output of gluster commands from both of the boards.
> > > > >
> > > > > in logs
> > > > > 00030 is board A
> > > > > 00250 is board B.
> > > > This attachment doesn't help much. Could you attach the full glusterd
> > > > log files from both the nodes?
> > > > >
> > > >
> > > > Inside this attachment you will find the full glusterd log files under
> > > > 00300/glusterd/ and 002500/glusterd/.
> > > No, that contains the configuration files.
> > > >
> > > > > Thanks in advance; waiting for your reply.
> > > > >
> > > > > Regards,
> > > > > Abhishek
> > > > >
> > > > >
> > > > > Regards
> > > > > Abhishek Paliwal
> > > > >
> > > > >
> > > > > _______________________________________________
> > > > > Gluster-devel mailing list
> > > > > Gluster-devel@xxxxxxxxxxx
> > > > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > > > >
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > >
> > > >
> > > >
> > > > Regards
> > > > Abhishek Paliwal
> > >
> > >
> > >
> > >
> > > --
> > >
> > >
> > >
> > >
> > > Regards
> > > Abhishek Paliwal
> >
> >
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
--
Regards
Abhishek Paliwal
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel