Re: peer rejected but connected


 



Thank you for the acknowledgement.

On Mon, Sep 4, 2017 at 6:39 PM, lejeczek <peljasz@xxxxxxxxxxx> wrote:
yes, I see things got lost in transit; I said before:

    ...re-probed from a different peer than I did the first time,
    and now it is not rejected.
    Now I'm restarting the fourth (newly added) peer's glusterd
    and... it seems to work. <- HERE!  (even though....

and then I asked:

Is there anything I should double-check to make sure all
    is 100% fine before I use that newly added peer for
    bricks?

Below is my full message. Basically, new peers no longer get rejected.


On 04/09/17 13:56, Gaurav Yadav wrote:

Executing "gluster volume set all cluster.op-version <op-version>" on all the existing nodes will solve this problem.

If the issue still persists, please provide me the following logs (working cluster + newly added peer):
1. the glusterd.info file from /var/lib/glusterd on all nodes
2. glusterd.log from all nodes
3. the info file from all nodes
4. cmd-history from all nodes
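The four items above can be gathered with a small script. This is only a hedged sketch: the hostnames, root SSH access, and the default paths (cmd-history lives in cmd_history.log under /var/log/glusterfs, and the per-volume info files under /var/lib/glusterd/vols/<vol>/ on stock installs) are assumptions, not details from this thread.

```shell
# Hypothetical node names -- replace with your actual peers.
NODES="gl-node1 gl-node2 gl-node3 gl-node4"
# Default file locations on stock installs (assumption); add the
# per-volume /var/lib/glusterd/vols/<vol>/info files as needed.
FILES="/var/lib/glusterd/glusterd.info /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log"

# Print one scp command per file per node; review the list, then run it.
for n in $NODES; do
  for f in $FILES; do
    printf 'scp root@%s:%s logs/%s/\n' "$n" "$f" "$n"
  done
done
```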

Thanks
Gaurav

On Mon, Sep 4, 2017 at 2:09 PM, lejeczek <peljasz@xxxxxxxxxxx> wrote:

    I do not see it; did you write anything?

    On 03/09/17 11:54, Gaurav Yadav wrote:



        On Fri, Sep 1, 2017 at 9:02 PM, lejeczek
        <peljasz@xxxxxxxxxxx> wrote:

            Did you miss my reply before?
            Here it is:

            Now, a "weird" thing:

            I did that; the fourth peer was still rejected, and its
            glusterd would still fail to restart (all after upping
            to 31004). I redid it, wiped and re-probed from a
            different peer than I did the first time, and now it is
            not rejected.
            Now I'm restarting the fourth (newly added) peer's
            glusterd and... it seems to work (even though
            tier-enabled=0 is still there, now on all four peers; it
            was not there on the three previously working peers).

            Is there anything I should double-check to make sure
            all is 100% fine before I use that newly added peer
            for bricks?

              For this I only need the logs, to see what has
              gone wrong.


        Please provide me the following
        (working cluster + newly added peer):
        1. the glusterd.info file from /var/lib/glusterd on all nodes
        2. glusterd.log from all nodes
        3. the info file from all nodes
        4. cmd-history from all nodes


            On 01/09/17 11:11, Gaurav Yadav wrote:

                I replicated the problem locally, and with the
                steps I suggested to you it worked for me...

                Please provide me the following
                (working cluster + newly added peer):
                1. the glusterd.info file from /var/lib/glusterd on all nodes
                2. glusterd.log from all nodes
                3. the info file from all nodes
                4. cmd-history from all nodes


                On Fri, Sep 1, 2017 at 3:39 PM, lejeczek
                <peljasz@xxxxxxxxxxx> wrote:

                    Like I said, I upgraded from 3.8 to 3.10 a
                    while ago; it is 3.10.5 at the moment, and
                    only now, on 3.10.5, did I try to add a peer.

                    On 01/09/17 10:51, Gaurav Yadav wrote:

                        What is gluster --version on all these nodes?

                        On Fri, Sep 1, 2017 at 3:18 PM, lejeczek
                        <peljasz@xxxxxxxxxxx> wrote:

                            On the first node I got:
                            $ gluster volume set all cluster.op-version 31004
                            volume set: failed: Commit failed on 10.5.6.49. Please check log file for details.

                            but I immediately proceeded to the remaining nodes and:

                            $ gluster volume get all cluster.op-version
                            Option                  Value
                            ------                  -----
                            cluster.op-version      30712
                            $ gluster volume set all cluster.op-version 31004
                            volume set: failed: Required op-version (31004) should not be equal or lower than current cluster op-version (31004).
                            $ gluster volume get all cluster.op-version
                            Option                  Value
                            ------                  -----
                            cluster.op-version      31004

                            Last, the third node:

                            $ gluster volume get all cluster.op-version
                            Option                  Value
                            ------                  -----
                            cluster.op-version      30712
                            $ gluster volume set all cluster.op-version 31004
                            volume set: failed: Required op-version (31004) should not be equal or lower than current cluster op-version (31004).
                            $ gluster volume get all cluster.op-version
                            Option                  Value
                            ------                  -----
                            cluster.op-version      31004

                            So, even though it failed as above, I
                            now see that it's 31004 on all three
                            peers, at least according to the
                            "volume get all cluster.op-version"
                            command.
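As an aside, that per-node check can be scripted. A minimal sketch that parses the op-version out of the command's output; the output is stubbed here with the values shown above, since on a live node you would fill sample from the real command instead:

```shell
# Stubbed output of: gluster volume get all cluster.op-version
# On a live node use:  sample=$(gluster volume get all cluster.op-version)
sample='Option                                  Value
------                                  -----
cluster.op-version                      31004'

# Take the second whitespace-separated field of the cluster.op-version row.
opver=$(printf '%s\n' "$sample" | awk '$1 == "cluster.op-version" {print $2}')
echo "cluster op-version: $opver"
```

Running this on every peer and comparing the numbers confirms they all agree before adding bricks.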


                            On 01/09/17 10:38, Gaurav Yadav wrote:

                                gluster volume set all cluster.op-version 31004

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users

