Best Regards,
Strahil Nikolov
On Tue, Feb 15, 2022 at 14:28, Diego Zuccato <diego.zuccato@xxxxxxxx> wrote:

Hello all.

I'm experimenting with thin-arbiter and getting disappointing results.

I have 3 hosts in the trusted pool:

root@nas1:~# gluster --version
glusterfs 9.2
[...]
root@nas1:~# gluster pool list
UUID                                  Hostname   State
d4791fed-3e6d-4f8f-bdb6-4e0043610ead  nas3       Connected
bff398f0-9d1d-4bd0-8a47-0bf481d1d593  nas2       Connected
4607034c-919d-4675-b5fc-14e1cad90214  localhost  Connected

When I try to create a new volume, the first initialization succeeds:

root@nas1:~# gluster v create Bck replica 2 thin-arbiter 1 nas{1,3}:/bricks/00/Bck nas2:/bricks/arbiter/Bck
volume create: Bck: success: please start the volume to access data

But adding a second brick segfaults the daemon:

root@nas1:~# gluster v add-brick Bck nas{1,3}:/bricks/01/Bck
Connection failed. Please check if gluster daemon is operational.

After erroring out, "systemctl status glusterd" reports the daemon in "restarting" state, and it eventually restarts. But the new brick is not added to the volume, even though trying to re-add it yields a "brick is already part of a volume" error. It seems glusterd crashes between marking the brick dir as used and recording its data in the config.

If I try to add all the bricks during creation, glusterd does not die, but the volume doesn't get created:

root@nas1:~# rm -rf /bricks/{00..07}/Bck && mkdir /bricks/{00..07}/Bck
root@nas1:~# gluster v create Bck replica 2 thin-arbiter 1 nas{1,3}:/bricks/00/Bck nas{1,3}:/bricks/01/Bck nas{1,3}:/bricks/02/Bck nas{1,3}:/bricks/03/Bck nas{1,3}:/bricks/04/Bck nas{1,3}:/bricks/05/Bck nas{1,3}:/bricks/06/Bck nas{1,3}:/bricks/07/Bck nas2:/bricks/arbiter/Bck
volume create: Bck: failed: Commit failed on localhost.
Please check the log file for more details.

Couldn't find anything useful in the logs :(

If I create a "replica 3 arbiter 1" volume over the same brick directories (just adding some directories to keep the arbiters separated), it succeeds:

root@nas1:~# gluster v create Bck replica 3 arbiter 1 nas{1,3}:/bricks/00/Bck nas2:/bricks/arbiter/Bck/00
volume create: Bck: success: please start the volume to access data
root@nas1:~# for T in {01..07}; do gluster v add-brick Bck nas{1,3}:/bricks/$T/Bck nas2:/bricks/arbiter/Bck/$T ; done
volume add-brick: success
volume add-brick: success
volume add-brick: success
volume add-brick: success
volume add-brick: success
volume add-brick: success
volume add-brick: success
root@nas1:~# gluster v start Bck
volume start: Bck: success
root@nas1:~# gluster v info Bck

Volume Name: Bck
Type: Distributed-Replicate
Volume ID: 4786e747-8203-42bf-abe8-107a50b238ee
Status: Started
Snapshot Count: 0
Number of Bricks: 8 x (2 + 1) = 24
Transport-type: tcp
Bricks:
Brick1: nas1:/bricks/00/Bck
Brick2: nas3:/bricks/00/Bck
Brick3: nas2:/bricks/arbiter/Bck/00 (arbiter)
Brick4: nas1:/bricks/01/Bck
Brick5: nas3:/bricks/01/Bck
Brick6: nas2:/bricks/arbiter/Bck/01 (arbiter)
Brick7: nas1:/bricks/02/Bck
Brick8: nas3:/bricks/02/Bck
Brick9: nas2:/bricks/arbiter/Bck/02 (arbiter)
Brick10: nas1:/bricks/03/Bck
Brick11: nas3:/bricks/03/Bck
Brick12: nas2:/bricks/arbiter/Bck/03 (arbiter)
Brick13: nas1:/bricks/04/Bck
Brick14: nas3:/bricks/04/Bck
Brick15: nas2:/bricks/arbiter/Bck/04 (arbiter)
Brick16: nas1:/bricks/05/Bck
Brick17: nas3:/bricks/05/Bck
Brick18: nas2:/bricks/arbiter/Bck/05 (arbiter)
Brick19: nas1:/bricks/06/Bck
Brick20: nas3:/bricks/06/Bck
Brick21: nas2:/bricks/arbiter/Bck/06 (arbiter)
Brick22: nas1:/bricks/07/Bck
Brick23: nas3:/bricks/07/Bck
Brick24: nas2:/bricks/arbiter/Bck/07 (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Does thin arbiter support just one replica of bricks?

--
Diego Zuccato
DIFA
- Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786
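[Editorial aside, not part of the original thread: the add-brick loop in the quoted message relies on bash brace expansion, which turns each nas{1,3}:... word into two separate brick arguments. A dry-run sketch that prefixes the command with "echo" shows exactly what would be executed, which can be useful before touching a live cluster:]

```shell
#!/usr/bin/env bash
# Dry-run of the add-brick loop from the message above.
# "echo" prints the fully expanded command instead of running it,
# so the {1,3} host expansion and the {01..07} range can be checked
# without contacting glusterd.
for T in {01..07}; do
    echo gluster v add-brick Bck nas{1,3}:/bricks/$T/Bck nas2:/bricks/arbiter/Bck/$T
done
```

Each printed line shows the two data bricks (nas1 and nas3) plus the nas2 arbiter brick for that index, confirming the loop issues one add-brick per replica set.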
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users