Re: GlusterFS iptables confusion

Great, thanks Joe.

Q) And about the second doubt I had: I have 4 GlusterFS servers, and I am using 1 brick from every server to form a distributed replicated volume with 4 bricks (one from every server), which I mount on the client. In this case, should I keep ports 49152-49153 open on each server (since I am using only one brick from every server), or should I keep 49152-49155 open on each server, since the final volume that I mount on the client has 4 bricks in all (one from every server)?

The GlusterFS documentation states it should be 49152 + (number of bricks across all volumes). I am finding it difficult to understand this.
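In the meantime, here is the iptables rule set I am planning to apply on each server, based on the ports you listed. This is just my own sketch, assuming GlusterFS >= 3.4, one brick hosted per server, and that the built-in Gluster NFS server is used. Please correct me if I have this wrong:

    # glusterd management port (24008/tcp is only needed for RDMA)
    iptables -I INPUT -p tcp -m multiport --dports 24007,24008 -j ACCEPT

    # Brick ports: one per brick hosted on THIS server, counting up from 49152.
    # With a single brick per server, I assume only 49152 is needed here.
    iptables -I INPUT -p tcp --dport 49152 -j ACCEPT

    # rpcbind/portmap, which Gluster NFS depends on (TCP and UDP)
    iptables -I INPUT -p tcp --dport 111 -j ACCEPT
    iptables -I INPUT -p udp --dport 111 -j ACCEPT

    # Gluster NFS (38465-38467), NLM (38468), and NFS on 2049
    iptables -I INPUT -p tcp -m multiport --dports 38465:38468,2049 -j ACCEPT

    # Persist the rules across reboots (CentOS 6)
    service iptables save

(I am using -I rather than -A so the rules land ahead of the default REJECT rule in the CentOS 6 INPUT chain.)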

Thanks,
Gauri

On Mon, Nov 17, 2014 at 11:20 AM, Joe Julian <joe@xxxxxxxxxxxxxxxx> wrote:
glusterd's management port is 24007/tcp (also 24008/tcp if you use RDMA). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally, it will listen on 38465-38467/tcp for NFS, plus 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111, and on port 2049 since 3.4.


On 11/17/2014 09:57 AM, Gauri Desai wrote:
Hello list,


Q1) I am confused about which iptables ports should be open while running GlusterFS 3.6.1 on CentOS 6.5.
I know that ports 24007, 24008, and 111 (TCP + UDP) should be open, but for the bricks across all volumes, should ports 49152 + (number of bricks across all volumes) be open, or ports 24009 + (number of bricks across all volumes)? I know that the iptables rules are different for GlusterFS versions above 3.4.


Q2) Also, I am using 4 servers and one client to make a distributed replicated volume using GlusterFS. From every server I have used one brick, so the volume mounted on the client has 4 bricks. Accordingly, should I keep ports 49152-49153 open on every server, or 49152-49155 open on every server? (I am using just one brick on every server for the volume.)


It would be great if you all could help.

Thanks,
Gauri


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users

