I am using the following config, but replication does not appear to be taking place between the nodes. If I take down the node a file was created on, the other nodes cannot see it. My only guess is that once the client reaches the nufa translator, it ignores the other cluster volumes defined alongside it. If that is so, is there any way to have replication with nufa? Anyone have thoughts?

Config below. Thanks in advance.

-Dave

(Federico: a rough, untested sketch of what I think a nufa+replicate stack would have to look like follows your quoted message at the bottom.)

volume posix
  type storage/posix
  option directory /data
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  # option transport.socket.bind-address 192.168.0.2  # default is to listen on all interfaces
  option transport.socket.listen-port 6996
  option auth.addr.brick.allow 192.168.0.*
  subvolumes brick
end-volume

volume ecp1
  type protocol/client
  option transport-type tcp
  option remote-host ecp1.razorcloud.gfs
  option remote-subvolume brick
end-volume

volume ecp2
  type protocol/client
  option transport-type tcp
  option remote-host ecp2.razorcloud.gfs
  option remote-subvolume brick
end-volume

volume ecp3
  type protocol/client
  option transport-type tcp
  option remote-host ecp3.razorcloud.gfs
  option remote-subvolume brick
end-volume

volume nufa
  type cluster/nufa
  option local-volume-name `hostname`  # note the backquote, so the 'hostname' output will be used as the option
  # option lookup-unhashed yes
  subvolumes ecp1 ecp2 ecp3
end-volume

volume replication
  type cluster/replicate
  subvolumes ecp1 ecp2 ecp3
end-volume

volume cluster
  type cluster/distribute
  # option lookup-unhashed yes
  option min-free-disk 20%
  subvolumes ecp1 ecp2 ecp3
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes nufa
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

On Mon, Jun 15, 2009 at 9:36 AM, Sacerdoti, Federico
<Federico.Sacerdoti at deshawresearch.com> wrote:
> Hello,
>
> Can you provide an example of a NUFA+replicate config? I would like to
> test its performance on 100 nodes, compared to distribute+replicate,
> which I've already done.
>
> Thanks,
> fds
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
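Federico: for what it's worth, here is the direction I plan to try next. This is a guess, not a tested config. As far as I understand, the cluster translators have to be stacked rather than defined side by side: replicate pairs are built first, and nufa sits on top of them. The names rep1/rep2 and the fourth client volume ecp4 are made up for illustration (with one brick per server you would need an even number of servers, or two export directories per server, to form the pairs):

volume rep1
  type cluster/replicate
  # every file written through rep1 is mirrored on both clients
  subvolumes ecp1 ecp2
end-volume

volume rep2
  type cluster/replicate
  subvolumes ecp3 ecp4
end-volume

volume nufa
  type cluster/nufa
  # local-volume-name must name one of nufa's own subvolumes, which is
  # now a replicate volume, so it has to be set per client machine --
  # the `hostname` trick from my config above no longer applies directly
  option local-volume-name rep1
  subvolumes rep1 rep2
end-volume

If anyone can confirm whether this stacking is what the nufa translator expects, I'd appreciate it.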