Gerry Reno wrote:
Krishna and Anand:
Hope these help. Here are my configs:
================================================
### file: glusterfs-server.vol
### GRP: this file goes on all storage bricks
##############################################
### GlusterFS Server Volume Specification ##
##############################################
#### CONFIG FILE RULES:
### "#" is comment character.
### - Config file is case sensitive
### - Options within a volume block can be in any order.
### - Spaces or tabs are used as delimiters within a line.
### - Multiple values to an option are : delimited.
### - Each option should end within a line.
### - Missing or commented fields will assume default values.
### - Blank/commented lines are allowed.
### - Sub-volumes should already be defined above before referring.
### Export volume "brick" with the contents of the "/home/vmail/mailbrick" directory.
volume brick
type storage/posix # POSIX FS translator
option directory /home/vmail/mailbrick # Export this directory
end-volume
### Add network serving capability to above brick.
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
# option ibv-send-work-request-size 131072
# option ibv-send-work-request-count 64
# option ibv-recv-work-request-size 131072
# option ibv-recv-work-request-count 64
# option transport-type ib-sdp/server # For Infiniband transport
# option transport-type ib-verbs/server # For ib-verbs transport
# option bind-address 127.0.0.1 # Default is to listen on all interfaces
option listen-port 6996 # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
subvolumes brick
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through the "auth"
# option.
# option auth.ip.brick.allow * # Allow full access to "brick" volume
# option auth.ip.brick.allow 192.168.* # Allow subnet access to "brick" volume
option auth.ip.brick.allow 127.0.0.1,192.168.1.220,192.168.1.221 # Allow access to "brick" volume
end-volume
================================================
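For completeness, each brick loads this spec by pointing the server daemon at it; I believe the invocation is along these lines (the /usr/local/etc path just matches where I keep the client spec below):

glusterfsd -f /usr/local/etc/glusterfs/glusterfs-server.vol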
================================================
### file: glusterfs-client.vol
### GRP: this file goes on every client node in the cluster
##############################################
### GlusterFS Client Volume Specification ##
##############################################
#### CONFIG FILE RULES:
### "#" is comment character.
### - Config file is case sensitive
### - Options within a volume block can be in any order.
### - Spaces or tabs are used as delimiters within a line.
### - Each option should end within a line.
### - Missing or commented fields will assume default values.
### - Blank/commented lines are allowed.
### - Sub-volumes should already be defined above before referring.
### Declare a local subvolume (direct POSIX access to this node's brick)
volume client-local
type storage/posix
option directory /home/vmail/mailbrick
end-volume
### Add client feature and attach to remote subvolume
volume client1
type protocol/client
option transport-type tcp/client # for TCP/IP transport
option remote-host 192.168.1.220 # IP address of the remote brick
option remote-port 6996 # default server port is 6996
option remote-subvolume brick # name of the remote volume
end-volume
volume client2
type protocol/client
option transport-type tcp/client
option remote-host 192.168.1.221
option remote-port 6996
option remote-subvolume brick
end-volume
#volume bricks
# type cluster/unify
# subvolumes *
# option scheduler nufa
# # does this brick name need to match the one in the local server.vol?
# option nufa.local-volume-name brick # note 'brick' is singular
#end-volume
### Add automatic file replication (AFR) feature
volume afr
type cluster/afr
subvolumes client1 client2
# option replicate:*.html 2
# option replicate:*.db 5
## ok, this would be RAID-1 on 2 nodes
# option replicate:* 2
## so how would you say RAID-1 on all nodes? with * ?
# option replicate *:2
# option replicate client1,client2:2
# "option replicate" is no longer supported; see:
# http://www.mail-archive.com/gluster-devel@xxxxxxxxxx/msg02201.html
# a pattern-matching translator will be provided later in 1.4
end-volume
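### (As far as I can tell, with "option replicate" gone, AFR simply
### mirrors every file across all of its listed subvolumes -- so
### RAID-1 across N nodes would just mean listing N clients, e.g.:)
#volume afr-all
# type cluster/afr
# subvolumes client1 client2 client3 # client3 is hypothetical, to show the pattern
#end-volume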
### Add writeback feature
#volume writeback
# type performance/write-behind
# option aggregate-size 131072 # unit in bytes
# subvolumes afr # must name an existing volume; afr is defined above
#end-volume
### Add readahead feature
#volume readahead
# type performance/read-ahead
# option page-size 65536 # unit in bytes
# option page-count 16 # cache per file = (page-count x page-size)
# subvolumes writeback
#end-volume
================================================
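If I re-enable the performance translators, my understanding is that they simply chain on top of afr, and the client mounts whichever volume ends up on top of the stack -- something like this (a sketch, not tested):

volume writeback
type performance/write-behind
option aggregate-size 131072
subvolumes afr
end-volume

volume readahead
type performance/read-ahead
option page-size 65536
option page-count 16
subvolumes writeback
end-volume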
And here is how I mount the client (from /etc/fstab):
/usr/local/etc/glusterfs/glusterfs-client.vol /home/vmail/mailstore glusterfs defaults 0 0
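Outside of fstab, I believe the equivalent manual mount would be:

glusterfs -f /usr/local/etc/glusterfs/glusterfs-client.vol /home/vmail/mailstore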
Regards,
Gerry