Re: Experimenting with thin-arbiter

Not there. It's not one of the defined services :(
Maybe Debian does not support it?
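In case it's useful, a quick way to verify whether the Debian packages 
ship that unit at all might be (just a sketch, assuming the server 
package is named glusterfs-server):

  systemctl list-unit-files 'gluster*'
  dpkg -L glusterfs-server | grep -i ta-volume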

On 16/02/2022 13:26, Strahil Nikolov wrote:
My bad, it should be gluster-ta-volume.service
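If the unit is present, its state and recent logs can be checked with, 
e.g.:

  systemctl status gluster-ta-volume.service
  journalctl -u gluster-ta-volume.service -b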

    On Wed, Feb 16, 2022 at 7:45, Diego Zuccato
    <diego.zuccato@xxxxxxxx> wrote:
    No such process is defined. Just the standard glusterd.service and
    glustereventsd.service . Using Debian stable.

    On 15/02/2022 15:41, Strahil Nikolov wrote:
     > Any errors in gluster-ta.service on the arbiter node?
     >
     > Best Regards,
     > Strahil Nikolov
     >
     >    On Tue, Feb 15, 2022 at 14:28, Diego Zuccato
     >    <diego.zuccato@xxxxxxxx> wrote:
     >    Hello all.
     >
     >    I'm experimenting with thin-arbiter and getting disappointing results.
     >
     >    I have 3 hosts in the trusted pool:
     > root@nas1:~# gluster --version
     >    glusterfs 9.2
     >    [...]
     > root@nas1:~# gluster pool list
     >    UUID                                    Hostname        State
     >    d4791fed-3e6d-4f8f-bdb6-4e0043610ead    nas3            Connected
     >    bff398f0-9d1d-4bd0-8a47-0bf481d1d593    nas2            Connected
     >    4607034c-919d-4675-b5fc-14e1cad90214    localhost      Connected
     >
     >    When I try to create a new volume, the first initialization succeeds:
     > root@nas1:~# gluster v create Bck replica 2 thin-arbiter 1
     >    nas{1,3}:/bricks/00/Bck nas2:/bricks/arbiter/Bck
     >    volume create: Bck: success: please start the volume to access data
     >
     >    But adding a second brick segfaults the daemon:
     > root@nas1:~# gluster v add-brick Bck nas{1,3}:/bricks/01/Bck
     >    Connection failed. Please check if gluster daemon is operational.
     >
     >    After erroring out, systemctl status glusterd reports the daemon in
     >    "restarting" state and it eventually restarts. But the new brick is
     >    not added to the volume, yet trying to re-add it yields a "brick is
     >    already part of a volume" error. It seems glusterd crashes between
     >    marking the brick dir as used and recording its data in the config.
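     >    (Side note, in case it helps: when a brick directory is left in that
     >    half-added state, the usual way to make it reusable again is to clear
     >    the gluster xattrs on it, roughly like this, using the path from
     >    above, on each node holding that brick:
     >
     >    setfattr -x trusted.glusterfs.volume-id /bricks/01/Bck
     >    setfattr -x trusted.gfid /bricks/01/Bck
     >    rm -rf /bricks/01/Bck/.glusterfs )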
     >
     >    If I try to add all the bricks during creation, glusterd does not
     >    die, but the volume doesn't get created:
     > root@nas1:~# rm -rf /bricks/{00..07}/Bck && mkdir /bricks/{00..07}/Bck
     > root@nas1:~# gluster v create Bck replica 2 thin-arbiter 1
     >    nas{1,3}:/bricks/00/Bck nas{1,3}:/bricks/01/Bck nas{1,3}:/bricks/02/Bck
     >    nas{1,3}:/bricks/03/Bck nas{1,3}:/bricks/04/Bck nas{1,3}:/bricks/05/Bck
     >    nas{1,3}:/bricks/06/Bck nas{1,3}:/bricks/07/Bck nas2:/bricks/arbiter/Bck
     >    volume create: Bck: failed: Commit failed on localhost. Please check
     >    the log file for more details.
     >
     >    Couldn't find anything useful in the logs :(
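     >    (For anyone trying to reproduce: the daemon log is
     >    /var/log/glusterfs/glusterd.log by default, and error-level lines
     >    can be filtered with something like
     >    grep ' E ' /var/log/glusterfs/glusterd.log
     >    but nothing there pointed at the failed commit.)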
     >
     >    If I create a "replica 3 arbiter 1" over the same brick directories
     >    (just adding some directories to keep arbiters separated), it succeeds:
     > root@nas1:~# gluster v create Bck replica 3 arbiter 1
     >    nas{1,3}:/bricks/00/Bck nas2:/bricks/arbiter/Bck/00
     >    volume create: Bck: success: please start the volume to access data
     > root@nas1:~# for T in {01..07}; do gluster v add-brick Bck
     >    nas{1,3}:/bricks/$T/Bck nas2:/bricks/arbiter/Bck/$T ; done
     >    volume add-brick: success
     >    volume add-brick: success
     >    volume add-brick: success
     >    volume add-brick: success
     >    volume add-brick: success
     >    volume add-brick: success
     >    volume add-brick: success
     > root@nas1:~# gluster v start Bck
     >    volume start: Bck: success
     > root@nas1:~# gluster v info Bck
     >
     >    Volume Name: Bck
     >    Type: Distributed-Replicate
     >    Volume ID: 4786e747-8203-42bf-abe8-107a50b238ee
     >    Status: Started
     >    Snapshot Count: 0
     >    Number of Bricks: 8 x (2 + 1) = 24
     >    Transport-type: tcp
     >    Bricks:
     >    Brick1: nas1:/bricks/00/Bck
     >    Brick2: nas3:/bricks/00/Bck
     >    Brick3: nas2:/bricks/arbiter/Bck/00 (arbiter)
     >    Brick4: nas1:/bricks/01/Bck
     >    Brick5: nas3:/bricks/01/Bck
     >    Brick6: nas2:/bricks/arbiter/Bck/01 (arbiter)
     >    Brick7: nas1:/bricks/02/Bck
     >    Brick8: nas3:/bricks/02/Bck
     >    Brick9: nas2:/bricks/arbiter/Bck/02 (arbiter)
     >    Brick10: nas1:/bricks/03/Bck
     >    Brick11: nas3:/bricks/03/Bck
     >    Brick12: nas2:/bricks/arbiter/Bck/03 (arbiter)
     >    Brick13: nas1:/bricks/04/Bck
     >    Brick14: nas3:/bricks/04/Bck
     >    Brick15: nas2:/bricks/arbiter/Bck/04 (arbiter)
     >    Brick16: nas1:/bricks/05/Bck
     >    Brick17: nas3:/bricks/05/Bck
     >    Brick18: nas2:/bricks/arbiter/Bck/05 (arbiter)
     >    Brick19: nas1:/bricks/06/Bck
     >    Brick20: nas3:/bricks/06/Bck
     >    Brick21: nas2:/bricks/arbiter/Bck/06 (arbiter)
     >    Brick22: nas1:/bricks/07/Bck
     >    Brick23: nas3:/bricks/07/Bck
     >    Brick24: nas2:/bricks/arbiter/Bck/07 (arbiter)
     >    Options Reconfigured:
     >    cluster.granular-entry-heal: on
     >    storage.fips-mode-rchecksum: on
     >    transport.address-family: inet
     >    nfs.disable: on
     >    performance.client-io-threads: off
     >
     >    Does thin-arbiter support just a single replica set of bricks?
     >
     >    --
     >    Diego Zuccato
     >    DIFA - Dip. di Fisica e Astronomia
     >    Servizi Informatici
     >    Alma Mater Studiorum - Università di Bologna
     >    V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
     >    tel.: +39 051 20 95786

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



