One thing that caught my eye:
auth.allow: 172.17.*.*
Can you remove that, restart glusterd (or the nodes) and try again?
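Something along these lines should clear it (the reset syntax is the standard gluster CLI; adjust the restart command to whatever init system your nodes use):

sudo gluster volume reset gv auth.allow    # clears the option back to its default
sudo service glusterd restart              # run on both rigel and betelgeuse
sudo gluster volume status                 # check whether the second brick shows up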
Also, do you have any firewall/iptables rules enabled? If so, consider testing with the firewall temporarily disabled.
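As a quick check on both nodes, something like this would do (the port numbers assume GlusterFS 3.4+, where glusterd listens on 24007 and brick ports start at 49152; this is only a temporary test, not a permanent rule set):

sudo iptables -L -n                                            # see what rules are active
sudo iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd / management
sudo iptables -I INPUT -p tcp --dport 49152:49156 -j ACCEPT    # brick ports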
On Sat, Mar 22, 2014 at 7:09 PM, Peng Yu <pengyu.ut@xxxxxxxxx> wrote:
Hi,
There should be two bricks in the volume "gv". But `sudo gluster
volume status` does not show `betelgeuse:/mnt/raid6/glusterfs_export`.
Does anybody know what is wrong with this? Thanks.
pengy@rigel:~$ sudo gluster volume status
Status of volume: gv
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick rigel:/mnt/raid6/glusterfs_export         49152   Y       38971
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A
There are no active volume tasks
pengy@rigel:~$ sudo gluster volume info
Volume Name: gv
Type: Replicate
Volume ID: 64754d6c-3736-41d8-afb5-d8071a6a6a07
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rigel:/mnt/raid6/glusterfs_export
Brick2: betelgeuse:/mnt/raid6/glusterfs_export
Options Reconfigured:
auth.allow: 172.17.*.*
--
Regards,
Peng
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users