You can work with the underlying filesystem, say, to fix a problem, but
you'd want to work with it the way GlusterFS would, for consistency's
sake. So if it's a mirror, any change you make on one brick you'd want
to reproduce on the other.
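Brent's point can be sketched with ordinary filesystem commands (the paths below are hypothetical stand-ins for the two backend bricks): a manual fix applied directly to one side of the mirror has to be reproduced on the other side by hand, because GlusterFS won't see changes made behind its back.

```shell
# Hypothetical brick directories standing in for the two AFR backends.
BRICK_A=/tmp/export0
BRICK_B=/tmp/export1
mkdir -p "$BRICK_A" "$BRICK_B"

# A manual fix made directly on one brick...
echo "fixed" > "$BRICK_A/file1"

# ...must be reproduced on its mirror by hand, keeping both
# bricks identical, the way GlusterFS itself would.
cp -p "$BRICK_A/file1" "$BRICK_B/file1"
```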
With DRBD, only one node could have the device mounted at any given
time; even mounting it on the other node would be something of a
catastrophe. Note that this isn't true of recent, bidirectional
(dual-primary) DRBD, if you run a cluster filesystem such as GFS on top.
Thanks,
Brent
On Wed, 4 Apr 2007, Gerry Reno wrote:
Avati,
Yes, of course, it works. So it is similar to DRBD, where you must only
interact via the exposed mounts and never directly with the underlying
subsystem.
Gerry
Anand Avati wrote:
Gerry,
You have to touch the file via /mnt/glusterfs, not in the backend directly!
avati
On Tue, Apr 03, 2007 at 04:50:22PM -0400, Gerry Reno wrote:
I have not been successful at getting GlusterFS with AFR translator
working on 2 bricks:
=====================
test-server0.vol
=====================
volume brick
  type storage/posix                 # POSIX FS translator
  option directory /root/export0     # Export this directory
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server   # For TCP/IP transport
  option listen-port 6996            # Default is 6996
  subvolumes brick
  option auth.ip.brick.allow *       # Allow full access to "brick" volume
end-volume
=====================
test-server1.vol
=====================
volume brick
  type storage/posix                 # POSIX FS translator
  option directory /root/export1     # Export this directory
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server   # For TCP/IP transport
  option listen-port 6997            # Default is 6996
  subvolumes brick
  option auth.ip.brick.allow *       # Allow full access to "brick" volume
end-volume
=====================
test-client.vol
=====================
### Add client feature and declare local subvolume
volume client1-local
  type storage/posix
  option directory /root/export0
end-volume

volume client2-local
  type storage/posix
  option directory /root/export1
end-volume

### Add client feature and attach to remote subvolume
volume client1
  type protocol/client
  option transport-type tcp/client   # for TCP/IP transport
  option remote-host 192.168.1.25    # IP address of the remote brick
  option remote-port 6996            # default server port is 6996
  option remote-subvolume brick      # name of the remote volume
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.25
  option remote-port 6997
  option remote-subvolume brick
end-volume

### Add automatic file replication (AFR) feature
volume afr
  type cluster/afr
  subvolumes client1 client2
  option replicate *:2
end-volume
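For reference, a hedged sketch of the `option replicate` value in AFR of this vintage: examples from that era suggest it takes a comma-separated list of pattern:count pairs, with `*:2` meaning "keep two copies of every file". The pattern list below is an assumption for illustration; check your version's documentation for the exact syntax.

```
### Hypothetical sketch (assumed pattern:count syntax, not from the thread):
volume afr
  type cluster/afr
  subvolumes client1 client2
  option replicate *.tmp:1,*:2   # one copy of *.tmp files, two of everything else
end-volume
```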
=====================
Servers are started like this:
glusterfsd --spec-file=/usr/local/etc/glusterfs/test-server0.vol
glusterfsd --spec-file=/usr/local/etc/glusterfs/test-server1.vol
Client is started like this:
glusterfs --spec-file=./test-client.vol /mnt/glusterfs/
=====================
[root@grp-01-30-01 glusterfs]# touch /root/export0/file1
wait...
[root@grp-01-30-01 glusterfs]# find /root/export*
/root/export0
/root/export0/file1
/root/export1
=====================
I do not see any replication.
What am I missing?
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel