Thin arbiter daemon on non-thin setup?


 



This is on a brand-new Ubuntu 18.04 (bionic) install running Gluster 6.2 as a replica 3 arbiter 1 (normal arbiter) setup.

glusterfs-server/bionic,now 6.2-ubuntu1~bionic1 amd64 [installed]
  clustered file-system (server package)

Systemd reports a degraded state, and I see this in the systemctl listing:

● gluster-ta-volume.service    loaded failed failed    GlusterFS, Thin-arbiter process to maintain quorum for replica volume

systemctl status shows this:

● gluster-ta-volume.service - GlusterFS, Thin-arbiter process to maintain quorum for replica volume
   Loaded: loaded (/lib/systemd/system/gluster-ta-volume.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2019-06-16 12:36:15 PDT; 2 days ago
  Process: 13020 ExecStart=/usr/sbin/glusterfsd -N --volfile-id ta-vol -f /var/lib/glusterd/thin-arbiter/thin-arbiter.vol --brick-port 24007 --xlator-option ta-vol-server.transport.socket.listen-port=24007 (code=exited, status=255)
 Main PID: 13020 (code=exited, status=255)

Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]: gluster-ta-volume.service: Service hold-off time over, scheduling restart.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]: gluster-ta-volume.service: Scheduled restart job, restart counter is at 5.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]: Stopped GlusterFS, Thin-arbiter process to maintain quorum for replica volume.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]: gluster-ta-volume.service: Start request repeated too quickly.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]: gluster-ta-volume.service: Failed with result 'exit-code'.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]: Failed to start GlusterFS, Thin-arbiter process to maintain quorum for replica volume.

Since I am not using a thin arbiter, I am a little confused.

The Gluster setup itself seems fine and seems to work normally.

root@onetest2:/var/log/libvirt/qemu# gluster peer status
Number of Peers: 2

Hostname: onetest1.gluster
Uuid: 79dc67df-c606-42f8-bbee-f7e73c730eb8
State: Peer in Cluster (Connected)

Hostname: onetest3.gluster
Uuid: d4e3330b-eaac-4a54-ad2e-a0da1114ec09
State: Peer in Cluster (Connected)
root@onetest2:/var/log/libvirt/qemu# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 1a80b833-0850-4ddb-83fa-f36da2b7a8fc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: onetest2.gluster:/GLUSTER/gv0
Brick2: onetest3.gluster:/GLUSTER/gv0
Brick3: onetest1.gluster:/GLUSTER/gv0 (arbiter)

Thoughts?

Can I just disable or remove that service?
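If disabling it is the right answer, I assume the usual systemd approach would be something like the following (unit name taken from the status output above; I have not run this yet, so treat it as a sketch, not a confirmed fix):

```shell
# Stop the unit now and keep it from starting at boot.
systemctl disable --now gluster-ta-volume.service

# Optionally mask it, which also blocks manual or dependency-triggered starts.
systemctl mask gluster-ta-volume.service

# Confirm systemd is no longer degraded afterwards.
systemctl --failed
```

Masking seems safer than removing the unit file, since a package upgrade would just reinstall it.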

Sincerely,

W Kern




_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



