On 05/12/2015 02:15 AM, Christopher Pereira wrote:
On 11-05-2015 15:40, Christopher Pereira wrote:
There is an arbiter feature for replica 3 volumes
(https://github.com/gluster/glusterfs/blob/master/doc/features/afr-arbiter-volumes.md)
being released in glusterfs 3.7 that prevents files from going into
split-brain; you could try that out. If a write would cause a
split-brain, it fails with ENOTCONN to the application.
Seems to be broken, because files are also stored on the 3rd
brick, even when using "replica 3 arbiter 1".
Also, I don't see the .meta dir inside the mount point.
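For reference, one quick way to tell whether the arbiter brick is behaving as documented is to look at file sizes on it: an arbiter brick is supposed to hold only file names and metadata, so data files there should be 0 bytes. The sketch below simulates that check in a scratch directory; the brick path and file name are placeholders, not from this thread.

```shell
# A scratch dir stands in for the real arbiter brick path (an assumption);
# on a real node you would run stat against the brick directory itself.
BRICK=$(mktemp -d)
: > "$BRICK/vm-disk.img"              # what a healthy arbiter brick holds
stat -c '%s %n' "$BRICK/vm-disk.img"  # a correct arbiter shows size 0
```

If files on the 3rd brick show non-zero sizes after writes on the mount, the arbiter xlator is not in effect, which matches the behavior reported above.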
Bricks 1 and 2 are using:
glusterfs-3.7.0beta1-0.131.gitf54b232.el7.centos.x86_64
glusterfs-rdma-3.7.0beta1-0.131.gitf54b232.el7.centos.x86_64
glusterfs-cli-3.7.0beta1-0.131.gitf54b232.el7.centos.x86_64
glusterfs-api-3.7.0beta1-0.131.gitf54b232.el7.centos.x86_64
glusterfs-server-3.7.0beta1-0.131.gitf54b232.el7.centos.x86_64
glusterfs-geo-replication-3.7.0beta1-0.131.gitf54b232.el7.centos.x86_64
glusterfs-libs-3.7.0beta1-0.131.gitf54b232.el7.centos.x86_64
glusterfs-fuse-3.7.0beta1-0.131.gitf54b232.el7.centos.x86_64
vdsm-gluster-4.17.0-778.git597bb40.el7.noarch
glusterfs-debuginfo-3.7.0beta1-0.155.git72f80ae.el7.centos.x86_64
Brick 3 is using:
glusterfs-fuse-3.8dev-0.58.gitf692757.el7.centos.x86_64
glusterfs-geo-replication-3.8dev-0.58.gitf692757.el7.centos.x86_64
glusterfs-cli-3.8dev-0.58.gitf692757.el7.centos.x86_64
glusterfs-api-3.8dev-0.58.gitf692757.el7.centos.x86_64
glusterfs-server-3.8dev-0.58.gitf692757.el7.centos.x86_64
glusterfs-rdma-3.8dev-0.58.gitf692757.el7.centos.x86_64
vdsm-gluster-4.17.0-743.gite5856da.el7.noarch
glusterfs-libs-3.8dev-0.58.gitf692757.el7.centos.x86_64
glusterfs-3.8dev-0.58.gitf692757.el7.centos.x86_64
glusterfs-debuginfo-3.8dev-0.58.gitf692757.el7.centos.x86_64
Hi Christopher,
The patches for the arbiter logic went into 3.7.0beta2. Could you
upgrade bricks 1 and 2 to that version, or to 3.8dev like brick 3,
and try again? It is recommended to keep all 3 nodes on the same version.
After creating the volume, you can check the volfile of the 3rd
brick: it must contain the arbiter xlator in its graph, i.e.
/var/lib/glusterd/vols/<volname>/<volname>.<IP>.<third-brick-path>.vol
must have this entry:
volume <volname>-arbiter
type features/arbiter
subvolumes <volname>-posix
end-volume
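One way to check for that entry is a simple grep on the volfile. The sketch below writes a stand-in volfile (volume name "testvol" is a placeholder) so it runs anywhere; on a real node you would grep the actual path under /var/lib/glusterd/vols/.

```shell
# Placeholder volfile standing in for the 3rd brick's volfile;
# the volume name "testvol" is an assumption for illustration.
VOLFILE=$(mktemp)
cat > "$VOLFILE" <<'EOF'
volume testvol-arbiter
    type features/arbiter
    subvolumes testvol-posix
end-volume
EOF

# The arbiter xlator must appear exactly once in the brick's graph:
grep -c 'features/arbiter' "$VOLFILE"
```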
Thanks,
Ravi
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel