Hi Rohan,

Please find the comments inlined.

On Wed, Jul 16, 2008 at 11:09 AM, Rohan <rohan.thale@xxxxxxxxxxxxxxxx> wrote:

> Hi Avati,
>
> I know I'm being a pain in the ass, but we'll be going live with this
> setup soon, and as a system admin I don't want to take any chances
> from my side.
>
> Do you mind checking the following config vol files?
>
> Server vol file
>
> ###########################
> # Volume1
> ###########################
>
> volume posix1
>   type storage/posix                    # POSIX FS translator
>   option directory /usr/local/mysql/glfs-data  # Export this directory
> end-volume
>
> volume posix-locks-brick1
>   type features/posix-locks
>   option mandatory on                   # enables mandatory locking on all files
>   subvolumes posix1
> end-volume
>
> volume io-threads-brick1
>   type performance/io-threads
>   subvolumes posix-locks-brick1
>   option thread-count 8
>   option cache-size 512MB
> end-volume
>
> volume write-behind-brick1
>   type performance/write-behind
>   option aggregate-size 4MB
>   option flush-behind off
>   subvolumes io-threads-brick1
> end-volume
>
> volume brick
>   type performance/read-ahead
>   subvolumes write-behind-brick1
> end-volume
>
> ########################### client
> volume client-brick
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ACT-SM-011-R8.C6.DB.dc1.in.com
>   option remote-port 6996
>   option remote-subvolume brick
>   option transport-timeout 20
> end-volume
>
> #################################################################
> # AFR Volumes
> #################################################################
>
> volume afr
>   type cluster/afr
>   subvolumes client-brick brick
>   option scheduler random

afr does not use schedulers. Only unify uses them, to schedule nodes
for file creation. You might be interested in the "read-node" and
"read-schedule" options of afr instead; a minimal sketch follows after
your server spec below.

> end-volume
>
> #################################################################
> # Server spec
> #################################################################
>
> volume server
>   type protocol/server
>   option transport-type tcp/server      # For TCP/IP transport
>   option listen-port 6996               # Default is 6996
>   subvolumes afr brick
>   option auth.ip.afr.allow *
>   option auth.ip.brick.allow *
> end-volume
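Something like the following could replace the scheduler line -- a
minimal sketch only, assuming "read-node" accepts the name of the
subvolume that should serve reads (please check the afr documentation
for your release for the exact option names and accepted values):

  volume afr
    type cluster/afr
    subvolumes client-brick brick
    # no "option scheduler" line -- afr ignores schedulers
    # assumption: "read-node" names the subvolume to serve reads from;
    # reading from the local brick avoids a network round trip
    option read-node brick
  end-volume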
> Client vol file
>
> ######
> volume brick-volume
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host localhost
>   option remote-port 6996
>   option remote-subvolume afr
>   option transport-timeout 20
> end-volume
>
> volume iothreads-bricks
>   type performance/io-threads
>   option thread-count 10
>   option cache-size 512MB
>   subvolumes brick-volume
> end-volume
>
> volume wb-bricks
>   type performance/write-behind
>   option aggregate-size 512MB
>   option flush-behind off
>   subvolumes iothreads-bricks
> end-volume
>
> _____
>
> From: anand.avati@xxxxxxxxx [mailto:anand.avati@xxxxxxxxx] On Behalf Of Anand Avati
> Sent: Wednesday, July 16, 2008 12:29 PM
> To: Rohan
> Cc: Gluster-devel@xxxxxxxxxx
> Subject: Re: Help needed
>
> 2008-07-16 11:00:37 E [afr.c:2058:afr_open_cbk] afr:
> (path=/solr/C6/tomcat2000/solr8/data/index/_p4.frq child=brick)
> op_ret=-1 op_errno=2
> 2008-07-16 11:00:37 E [afr.c:2058:afr_open_cbk] afr:
> (path=/solr/C6/tomcat2000/solr8/data/index/_p4.frq child=client-brick)
> op_ret=-1 op_errno=2
>
> AFR will fix those missing files (errno=2 is ENOENT - no such
> file/directory) in self-heal. You can confirm by checking that the
> file exists on the backend nodes.
>
> avati
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel

--
Raghavendra G

A centipede was happy quite,
Until a toad in fun,
Said, "Pray, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
  -Anonymous