Re: Why is a brick missing from `sudo gluster volume status`?

Hi,

Here are the respective IP addresses of both servers. Why should I
remove "auth.allow: 172.17.*.*"? (And how do I remove it?)

pengy@rigel:~$ ifconfig |grep -A 7 '^br1'
br1       Link encap:Ethernet  HWaddr c8:1f:66:e2:90:45
          inet addr:172.17.1.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::ca1f:66ff:fee2:9045/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:312191 errors:0 dropped:0 overruns:0 frame:0
          TX packets:210807 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3741197826 (3.7 GB)  TX bytes:25954291 (25.9 MB)
pengy@betelgeuse:~$  ifconfig |grep -A 7 '^br1'
br1       Link encap:Ethernet  HWaddr c8:1f:66:df:01:0b
          inet addr:172.17.2.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::ca1f:66ff:fedf:10b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:197382 errors:0 dropped:0 overruns:0 frame:0
          TX packets:90443 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:11914450 (11.9 MB)  TX bytes:10016451 (10.0 MB)


Here is the firewall information. I don't see anything wrong. Do you
see anything wrong? Thanks.

pengy@rigel:~$ sudo ufw app list
Available applications:
  OpenSSH
pengy@rigel:~$ sudo ufw status
Status: inactive
pengy@rigel:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             192.168.122.200      state NEW,RELATED,ESTABLISHED tcp dpt:ssh
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

pengy@betelgeuse:~$ sudo ufw app list
Available applications:
  OpenSSH
pengy@betelgeuse:~$ sudo ufw status
Status: inactive
pengy@betelgeuse:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
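
In case it helps, I can also check from each node whether the other node's
Gluster ports are reachable (assuming nc/netcat is installed; 24007 is the
glusterd management port, and bricks use ports starting at 49152):

pengy@rigel:~$ sudo gluster peer status
pengy@rigel:~$ nc -zv 172.17.2.1 24007    # glusterd on betelgeuse
pengy@rigel:~$ nc -zv 172.17.2.1 49152    # first brick port

I can post the output of these if that would be useful.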


On Sat, Mar 22, 2014 at 2:01 PM, Carlos Capriotti
<capriotti.carlos@xxxxxxxxx> wrote:
> One thing that caught my eye:
>
> auth.allow: 172.17.*.*
>
> Can you remove that, restart glusterd/the nodes and try again?
>
> Also, do you have firewall/iptables rules enabled? If yes, consider testing
> with iptables/firewall disabled.
>
>
>
>
> On Sat, Mar 22, 2014 at 7:09 PM, Peng Yu <pengyu.ut@xxxxxxxxx> wrote:
>>
>> Hi,
>>
>> There should be two bricks in the volume "gv". But `sudo gluster
>> volume status` does not show `betelgeuse:/mnt/raid6/glusterfs_export`.
>> Does anybody know what is wrong with this? Thanks.
>>
>> pengy@rigel:~$ sudo gluster volume status
>> Status of volume: gv
>> Gluster process                                 Port    Online  Pid
>> ------------------------------------------------------------------------------
>> Brick rigel:/mnt/raid6/glusterfs_export         49152   Y       38971
>> NFS Server on localhost                         N/A     N       N/A
>> Self-heal Daemon on localhost                   N/A     N       N/A
>>
>> There are no active volume tasks
>> pengy@rigel:~$ sudo gluster volume info
>>
>> Volume Name: gv
>> Type: Replicate
>> Volume ID: 64754d6c-3736-41d8-afb5-d8071a6a6a07
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: rigel:/mnt/raid6/glusterfs_export
>> Brick2: betelgeuse:/mnt/raid6/glusterfs_export
>> Options Reconfigured:
>> auth.allow: 172.17.*.*
>>
>> --
>> Regards,
>> Peng
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@xxxxxxxxxxx
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>



-- 
Regards,
Peng
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users



