*note: this list apparently doesn't change the reply-to? I keep having to re-forward.

Ok, just unmounted on all 3, did rm -r /home/export/* on all 3, remounted, and then tried mkdir one. It does create the directory on all 3 servers, but then mkdir returns with "File exists". So my guess is that the way I have the AFR bricks configured must be some kind of loop? Not sure how to put it, but it must be something like: a file is created here, then mirrored there, but "there" is mirrored to another "there", which is mirrored back to here? Doh!

On 6/5/07, Amar S. Tumballi <amar@xxxxxxxxxxxxx> wrote:
I think you didn't start with an empty export directory when you started testing with these config files (most likely /home/export/ had some entries beforehand). That could have caused this problem. Can you check by running the same config, but with an empty export directory?

-amar (bulde on #gluster)

On 6/6/07, Brandon Lamb <brandonlamb@xxxxxxxxx> wrote:
> Haha, ok, now I have something screwy. Below is my client.vol file.
>
> Now when I mount the cluster and try to create a directory it comes back with
>
> [root@dev glusterfs]# ls
> one two
> [root@dev glusterfs]# mkdir three
> mkdir: cannot create directory `three': File exists
> [root@dev glusterfs]# ls
> one three two
> [root@dev glusterfs]#
>
> Strange! Here is my config:
>
> --------------------------------------------------
> volume a1
> type protocol/client
> option transport-type tcp/client
> option remote-host 208.200.248.25
> option remote-port 6996
> option remote-subvolume locks
> end-volume
>
> volume a2
> type protocol/client
> option transport-type tcp/client
> option remote-host 208.200.248.25
> option remote-port 6996
> option remote-subvolume locks-afr
> end-volume
>
> volume b1
> type protocol/client
> option transport-type tcp/client
> option remote-host 208.200.248.26
> option remote-port 6996
> option remote-subvolume locks
> end-volume
>
> volume b2
> type protocol/client
> option transport-type tcp/client
> option remote-host 208.200.248.26
> option remote-port 6996
> option remote-subvolume locks-afr
> end-volume
>
> volume c1
> type protocol/client
> option transport-type tcp/client
> option remote-host 208.200.248.75
> option remote-port 6996
> option remote-subvolume locks
> end-volume
>
> volume c2
> type protocol/client
> option transport-type tcp/client
> option remote-host 208.200.248.75
> option remote-port 6996
> option remote-subvolume locks-afr
> end-volume
>
> volume afr1
> type cluster/afr
> subvolumes a1 b1
> option replicate *:2
> end-volume
>
> volume afr2
> type cluster/afr
> subvolumes b2 c1
> option replicate *:2
> end-volume
>
> volume afr3
> type cluster/afr
> subvolumes c2 a1
> option replicate *:2
> end-volume
>
> volume unify1
> type cluster/unify
> subvolumes afr1 afr2 afr3
> option readdir-force-success on
> option scheduler rr
> option rr.limits.min-free-disk 10GB
> end-volume
>
> volume iothreads
> type performance/io-threads
> option thread-count 8
> subvolumes unify1
> end-volume
>
> volume readahead
> type performance/read-ahead
> option page-size 131072
> option page-count 16
> subvolumes iothreads
> end-volume
>
> volume stat-performance
> type performance/stat-prefetch
> option cache-seconds 1
> subvolumes readahead
> end-volume
> --------------------------------------------------
>

--
Amar Tumballi
http://amar.80x25.org [bulde on #gluster/irc.gnu.org]
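Looking at the quoted client.vol, the "loop" looks like it is simply an overlap in the AFR definitions: a1 is a subvolume of both afr1 and afr3, while a2 is defined but never used. The same backend export ("locks" on 208.200.248.25) is therefore reachable through two of unify's subvolumes, so when unify creates the new directory on every subvolume, afr1 and afr3 both end up issuing the mkdir on a1 and the second attempt comes back with "File exists". A minimal sketch of a non-overlapping layout, keeping the same volume names and assuming each exported subvolume should belong to exactly one AFR pair:

--------------------------------------------------
volume afr1
type cluster/afr
subvolumes a1 b1
option replicate *:2
end-volume

volume afr2
type cluster/afr
subvolumes b2 c1
option replicate *:2
end-volume

volume afr3
type cluster/afr
# use the otherwise-unused a2 here instead of a1, so no backend export
# is shared between two AFR pairs under unify
subvolumes c2 a2
option replicate *:2
end-volume
--------------------------------------------------

With non-overlapping pairs, each backend export is written to by exactly one AFR volume, so the duplicate mkdir should go away; re-testing with empty /home/export/ directories on all three servers, as Amar suggests, would rule out stale entries from the earlier layout at the same time.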