We're moving to an HA setup for our SaaS environment, and I'm looking at using GlusterFS 2.0.9 (on 32-bit hardware) with CentOS 5.5.
This is up and running as follows:
gluster1 -- data server
gluster2 -- data server
app1 -- app server (mounts the GlusterFS volume at /home)
app2 -- app server (ditto)
Here's my question.
Test 1:
create /home/foo and add 10 files on app1
ls /home/foo on app2 -- I see them
ls /data/export/foo on gluster1/gluster2 -- I see them
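Spelled out as commands, Test 1 is roughly this (exact filenames don't matter; paths as described above):

# on app1
mkdir /home/foo
for i in $(seq 1 10); do date > /home/foo/file$i; done

# on app2
ls /home/foo                # all 10 files show up

# on gluster1 and gluster2 (the backing store)
ls /data/export/foo         # all 10 files show up on both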
Test 2:
rm -rf /home/foo on app2
ls on the other three -- the directory is gone everywhere
Test 3:
create /home/foo and add 5 files on app1
shutdown gluster2
add 5 more files
startup gluster2
ls /home/foo on app1/app2 and /data/export/foo on gluster1 -- I see all 10 files
ls /data/export/foo on gluster2 -- I see only the 5 files created while it was still up
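Again as commands, Test 3 is roughly this (run on app1 unless noted):

mkdir /home/foo
for i in $(seq 1 5); do date > /home/foo/file$i; done

# shut down gluster2, then:
for i in $(seq 6 10); do date > /home/foo/file$i; done

# bring gluster2 back up, then compare:
ls /home/foo                # on app1 and app2: 10 files
ls /data/export/foo         # on gluster1: 10 files
ls /data/export/foo         # on gluster2: only file1..file5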
How is failover/replication supposed to work when one of the back-end RAID 1 (replicated) storage servers goes down?
Config files are attached below.
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen --name journyx-gluster --raid 1 gluster1.int.journyx.com:/data/export gluster2.int.journyx.com:/data/export
volume posix1
type storage/posix
option directory /data/export
end-volume
volume locks1
type features/locks
subvolumes posix1
end-volume
volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume
volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume
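Both storage servers run glusterfsd against this file, started with something along these lines (path from memory -- it's wherever volgen wrote export.vol):

# on gluster1 and gluster2
glusterfsd -f /etc/glusterfs/export.vol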
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen --name journyx-gluster --raid 1 gluster1.int.journyx.com:/data/export gluster2.int.journyx.com:/data/export
volume posix1
type storage/posix
option directory /data/export
end-volume
volume locks1
type features/locks
subvolumes posix1
end-volume
volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume
volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume
## file auto generated by /usr/local/bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen --name journyx-gluster --raid 1 gluster1.int.journyx.com:/data/export gluster2.int.journyx.com:/data/export
# RAID 1
# TRANSPORT-TYPE tcp
volume gluster1.int.journyx.com-1
type protocol/client
option transport-type tcp
option remote-host 192.168.100.71
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume
volume gluster2.int.journyx.com-1
type protocol/client
option transport-type tcp
option remote-host 192.168.100.72
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume
volume mirror-0
type cluster/replicate
subvolumes gluster1.int.journyx.com-1 gluster2.int.journyx.com-1
end-volume
volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes mirror-0
end-volume
volume readahead
type performance/read-ahead
option page-count 4
subvolumes writebehind
end-volume
volume iocache
type performance/io-cache
option cache-size 1GB
option cache-timeout 1
subvolumes readahead
end-volume
volume quickread
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache
end-volume
volume statprefetch
type performance/stat-prefetch
subvolumes quickread
end-volume