On 05/11/2016 03:01 PM, Nicola Petracchi wrote:
Hi, thanks for the quick reply.
I have this configuration:
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
Linux 3.13.0-86-generic #130-Ubuntu SMP
ppa:gluster/glusterfs-3.7
ii  glusterfs-client  3.7.11-ubuntu1~trusty1  amd64  clustered file-system (client package)
ii  glusterfs-common  3.7.11-ubuntu1~trusty1  amd64  GlusterFS common libraries and translator modules
ii  glusterfs-server  3.7.11-ubuntu1~trusty1  amd64  clustered file-system (server package)
Okay, so you are running 3.7.11.
1. Was your volume created when you were running an older version?
2. What is the glusterd op-version on all nodes? (`cat /var/lib/glusterd/glusterd.info | grep operating-version`)
3. Does /var/lib/glusterd/vols/gvol0/info have 'arbiter_count=1'?
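For example, something like the following (assuming the stock /var/lib/glusterd paths of a package install) should show both values on each node:

# the op-version must be identical on every peer
cat /var/lib/glusterd/glusterd.info | grep operating-version

# glusterd's stored volume definition; it should contain arbiter_count=1
grep arbiter_count /var/lib/glusterd/vols/gvol0/info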
-Ravi
N
2016-05-11 10:57 GMT+02:00 Ravishankar N <ravishankar@xxxxxxxxxx>:
On 05/11/2016 02:17 PM, Nicola Petracchi wrote:
Hello, I have deployed a gluster configuration of this type:
Number of Bricks: 1 x (2 + 1) = 3
2 data + 1 arbiter
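For context, an arbiter volume of this shape is typically created with something along these lines (reconstructed here from the brick list below, not a transcript of the actual command):

gluster volume create gvol0 replica 3 arbiter 1 \
    pc01:/var/lib/gvol0/brick1 pc02:/var/lib/gvol0/brick2 pcgw:/var/lib/gvol0/brickgw
gluster volume start gvol0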
After deployment, all machines reported the correct volume info:
Volume Name: gvol0
Type: Replicate
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: pc01:/var/lib/gvol0/brick1
Brick2: pc02:/var/lib/gvol0/brick2
Brick3: pcgw:/var/lib/gvol0/brickgw (arbiter)
Then I had to restart the gluster service on the node hosting brick2; after that, only on that machine the volume info was:
Volume Name: gvol0
Type: Replicate
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: pc01:/var/lib/gvol0/brick1
Brick2: pc02:/var/lib/gvol0/brick2
Brick3: pcgw:/var/lib/gvol0/brickgw
The other nodes were still reporting the original volume info.
This seems strange to me, since I would expect all members of a cluster to be consistent in the information they report about a "shared" resource.
By the way, later on I had to reboot the arbiter server, and again, on that server:
Volume Name: gvol0
Type: Replicate
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: pc01:/var/lib/gvol0/brick1
Brick2: pc02:/var/lib/gvol0/brick2
Brick3: pcgw:/var/lib/gvol0/brickgw
Only one node, the one that was never rebooted, still states that brick3 is an arbiter. Apart from that, the arbiter node does appear to be working as an arbiter, since its brick contains empty files and stays up to date with the others.
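For instance, on the arbiter node a plain listing of the brick (path taken from the volume info above) should show the zero-byte copies of the data files:

ls -l /var/lib/gvol0/brickgw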
Is this a situation that needs to be fixed? Is it a bug?
What version of gluster are you running? This was a bug that was fixed in 3.7.7 (http://review.gluster.org/12479). I would advise you to use the latest release, 3.7.11, if you are trying out arbiter.
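On Ubuntu, assuming the packages come from the gluster PPA mentioned above, the upgrade would be roughly:

sudo add-apt-repository ppa:gluster/glusterfs-3.7
sudo apt-get update
sudo apt-get install glusterfs-server glusterfs-client glusterfs-common

done one node at a time, so the volume keeps quorum while each glusterd restarts.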
Hope this helps,
Ravi
Any considerations would be appreciated.
Regards
N
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users