For what it's worth, I've been getting the exact same thing, and one of
the most interesting parts is that I am not currently using NFS in any
capacity on those servers. I am, however, running Courier, which uses the
file system for locking in certain cases. If I find during my
investigation that this is actually the source of these problems, I'll be
sure to post.

Augie

On Thu, Aug 19, 2010 at 7:14 PM, Jenn Fountain <jfountai at comcast.net> wrote:
> Oh, and an added bonus: if I add the third server with an empty
> directory, I get a stale NFS error on my first server (the server that
> does the writing to the gluster share).
>
> Thanks for any help!
> -Jenn
>
> On Aug 19, 2010, at 6:31 PM, Jenn Fountain wrote:
>
> > Oh, and:
> >
> > [root at xx-xx upload]# rpm -qa | grep fuse
> > fuse-2.7.4-8.el5
> > fuse-libs-2.7.4-8.el5
> > fuse-libs-2.7.4-8.el5
> >
> > [root at xx-xx xx]# rpm -qa | grep gluster
> > glusterfs-common-3.0.5-1
> > glusterfs-server-3.0.5-1
> > glusterfs-client-3.0.5-1
> >
> > -Jenn
> >
> > On Aug 19, 2010, at 6:28 PM, Jenn Fountain wrote:
> >
> >> BTW, here is my config:
> >>
> >> ### Add client feature and attach to remote subvolume of server1
> >> volume brick1
> >>   type protocol/client
> >>   option transport-type tcp
> >>   option remote-host x.x.x.x      # IP address of the remote brick
> >>   option remote-subvolume brick   # name of the remote volume
> >> end-volume
> >>
> >> ### Add client feature and attach to remote subvolume of server2
> >> volume brick2
> >>   type protocol/client
> >>   option transport-type tcp
> >>   option remote-host x.x.x.x      # IP address of the remote brick
> >>   option remote-subvolume brick   # name of the remote volume
> >> end-volume
> >>
> >> ### Add client feature and attach to remote subvolume of server3
> >> volume brick3
> >>   type protocol/client
> >>   option transport-type tcp
> >>   option remote-host x.x.x.x      # IP address of the remote brick
> >>   option remote-subvolume brick   # name of the remote volume
> >> end-volume
> >>
> >> # The replicated volume with data
> >> volume replicate
> >>   type cluster/replicate
> >>   # optional, but useful if the workload is mostly reads
> >>   # !!! different values for box a and box b !!!
> >>   # option read-subvolume remote1
> >>   # option read-subvolume remote2
> >>   subvolumes brick1 brick2 brick3
> >> end-volume
> >>
> >> volume writebehind
> >>   type performance/write-behind
> >>   option cache-size 4MB
> >>   subvolumes replicate
> >> end-volume
> >>
> >> volume iocache
> >>   type performance/io-cache
> >>   option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
> >>   option cache-timeout 1
> >>   subvolumes writebehind
> >> end-volume
> >>
> >> volume quickread
> >>   type performance/quick-read
> >>   option cache-timeout 1
> >>   option max-file-size 64kB
> >>   subvolumes iocache
> >> end-volume
> >>
> >> # volume statprefetch
> >> #   type performance/stat-prefetch
> >> #   subvolumes quickread
> >> # end-volume
> >>
> >> Linux CentOS 5.5, no firewall between the servers.
> >>
> >> -Jenn
> >>
> >> On Aug 19, 2010, at 6:00 PM, Jenn Fountain wrote:
> >>
> >>> I am seeing this a couple of times in the logs on two of my servers:
> >>>
> >>> [afr.c:107:afr_set_split_brain] replicate: invalid argument: inode
> >>>
> >>> I am not 100% sure what this means or what I need to do to fix it.
> >>>
> >>> Any thoughts?
> >>>
> >>> -Jenn
> >>>
> >>> _______________________________________________
> >>> Gluster-users mailing list
> >>> Gluster-users at gluster.org
> >>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
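
The backtick expression used for the io-cache `cache-size` in the quoted
config sizes the cache at one fifth of total RAM. A minimal standalone
sketch of that arithmetic, with MemTotal hard-coded to a hypothetical
8 GiB host for illustration:

```shell
# Sketch of the io-cache cache-size calculation from the volfile above.
# /proc/meminfo reports MemTotal in kB; dividing by 5120 (1024 kB/MB * 5)
# gives one fifth of total RAM, expressed in MB.
mem_kb=8388608   # hypothetical MemTotal for an 8 GiB host; on a live system:
                 # grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g'
cache_mb=$(( mem_kb / 5120 ))
echo "option cache-size ${cache_mb}MB"   # prints: option cache-size 1638MB
```

Note the value is computed once, when the volfile is generated, not
re-evaluated by glusterfsd at runtime.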