Hi Alexey,

Now I suspect the 'fnmatch()' support in your system library. The stripe_size
should be '1048576' for any file, and it worked for us in all the cases we
tested. Can you apply another patch and see if it works?

In the function 'stripe_get_matching_bs ()', can you add 'return 1048576;' as
the first line and check whether it works fine? This is to corner the problem
to 'fnmatch()'. (A rough sketch of what I mean is appended at the end of this
mail.)

-amar

PS: also try printing the 64-bit number itself with '%lld' rather than
typecasting it while printing. (See the second sketch at the end of this
mail.)

> added a debug print to stripe.c:stripe_open()
>
> --------------------------------------------------------------
>   striped = data_to_int8 (dict_get (loc->inode->ctx, this->name));
>   local->striped = striped;
>
> + gf_log (this->name,
> +         GF_LOG_WARNING,
> +         "MY: stripe_open: local->stripe_size=%i local->striped=%i this->name=(%s)",
> +         (int)local->stripe_size, (int)local->striped, this->name);
>
>   if (striped == 1) {
>     local->call_count = 1;
> --------------------------------------------------------------
>
> and got something like this in the client log file:
>
> 2007-09-26 00:57:25 W [stripe.c:1804:stripe_open] stripe1: MY: stripe_open:
> local->stripe_size=0 local->striped=1 this->name=(stripe1)
>
> the file is created on one node only; changed the condition above to:
>
> --------------------------------------------------------------
> /*  if (striped == 1) { */
>   if (!striped) {
>     local->call_count = 1;
> --------------------------------------------------------------
>
> because the condition seems to work the wrong way (?):
>
>   } else {
>     /* Striped files */
>
> got the same result, the file is still created on one node only :(
>
> set/getfattr -n trusted.stripe1.stripe-size manually works fine
>
> regards, Alexey
>
> On 9/25/07, Raghavendra G <raghavendra.hg@xxxxxxxxx> wrote:
> >
> > Hi Alexey,
> >
> > Can you please try with glusterfs--mainline--2.5--patch-493 and check
> > whether the bug still persists? Also, if the bug is not fixed, can you
> > please send the glusterfs server and client configuration files?
> >
> > regards,
> >
> > On 9/25/07, Alexey Filin <alexey.filin@xxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > gluster@xxxxxxxxxx/glusterfs--mainline--2.5--patch-485
> > > fuse-2.7.0-glfs3
> > > Linux 2.6.9-55.0.2.EL.cern (Scientific Linux CERN SLC release
> > > 4.5 (Beryllium)), i386
> > >
> > > 4 HPC cluster work nodes, each node has two Gigabit interfaces for two
> > > LANs (Data Acquisition System LAN and SAN).
> > >
> > > server.vol and client.vol were made from the example in
> > > http://gluster.org/docs/index.php/GlusterFS_Translators but with the
> > > alu scheduler:
> > >
> > > server: brick->posix-locks->io-thr->wb->ra->server
> > > client: ((client1+client2)->stripe1)+((client3+client4)->stripe2)->afr->unify->iot->wb->ra->ioc
> > >
> > > unify is supposed to connect another 4 nodes after the tests
> > >
> > > copying from the local FS to GlusterFS and back on client1 works fine,
> > > performance is nearly native (as for local to local)
> > > back-end FS ext3 (get/setfattr don't work) => afr works fine, stripe
> > > doesn't work at all
> > > back-end FS xfs (get/setfattr work fine) => afr works fine, stripe
> > > doesn't work at all
> > >
> > > changed client.vol to (client1+client2)->stripe1->iot->wb->ra->ioc =>
> > > stripe still doesn't work
> > >
> > > the log files don't contain anything interesting
> > >
> > > 1) How do I make cluster/stripe work?
> > >
> > > http://gluster.org/docs/index.php/GlusterFS_FAQ says:
> > > "...if one uses 'cluster/afr' translator with 'cluster/stripe' then
> > > GlusterFS can provide high availability."
> > > 2) Is HA provided only for stripe+afr, or for afr alone too?
> > >
> > > I plan to use the cluster work nodes with their local hard disks as a
> > > distributed on-line and off-line storage for raw data acquired on our
> > > experimental setup (tape back-up is provided, of course). It is supposed
> > > to hold 10-20 terabytes of raw data in total (the cluster is expected to
> > > be upgraded in the future).
> > >
> > > 3) Does it make sense to use cluster/stripe (HA is very desirable) in
> > > this case?
> > >
> > > Thanks in advance for your answers.
> > >
> > > Alexey Filin.
> > > Experiment OKA, Institute for High Energy Physics, Protvino, Russia
> > >
> > > PS I also tortured GlusterFS with direct file manipulation (through the
> > > back-end FS); the results are good for me
> > > _______________________________________________
> > > Gluster-devel mailing list
> > > Gluster-devel@xxxxxxxxxx
> > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
> > --
> > Raghavendra G
>
> --
> Raghavendra G
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel

--
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!
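
(Appended sketch, referenced above.) To make the 'return 1048576;' suggestion
concrete, here is a minimal, self-contained toy of an fnmatch()-based
pattern-to-stripe-size lookup, with the debug short-circuit at the top. The
struct, the option table and the name 'toy_get_matching_bs' are made up for
illustration only; this is not the actual stripe.c code, just the shape of the
lookup that fnmatch() is suspected of breaking.

--------------------------------------------------------------
/* Toy version of a pattern -> stripe-size lookup built on fnmatch(),
 * only to show where a broken fnmatch() would make the size come out
 * as 0.  Names, struct and signature are hypothetical, not stripe.c. */
#include <fnmatch.h>
#include <stdio.h>
#include <stdint.h>

struct bs_option {
        const char *pattern;     /* e.g. "*.img" or "*"            */
        int64_t     block_size;  /* stripe size for matching files */
};

static struct bs_option options[] = {
        { "*", 1048576 },        /* default: 1MB for any file */
        { NULL, 0 }
};

static int64_t
toy_get_matching_bs (const char *path)
{
        struct bs_option *opt = NULL;

        /* Debug short-circuit suggested above: uncomment to take
         * fnmatch() completely out of the picture. */
        /* return 1048576; */

        for (opt = options; opt->pattern != NULL; opt++) {
                if (fnmatch (opt->pattern, path, 0) == 0)
                        return opt->block_size;
        }

        return 0;  /* no pattern matched: shows up as stripe_size=0 */
}

int
main (void)
{
        const char *path = "/testfile";

        /* print the 64-bit value itself, no (int) cast */
        printf ("stripe size for %s = %lld\n",
                path, (long long) toy_get_matching_bs (path));
        return 0;
}
--------------------------------------------------------------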
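
(Second appended sketch, for the PS about '%lld'.) A tiny standalone program
showing why casting a 64-bit value down to int before printing can put a
misleading number in the log, and how printing the value itself with '%lld'
avoids that. Plain printf() is used here instead of gf_log(), but the
format-string point is the same; the sample values are arbitrary.

--------------------------------------------------------------
/* Why "(int)value with %i" is a risky way to print a 64-bit stripe
 * size: anything that does not fit in 32 bits gets truncated/wrapped,
 * so the log can show a misleading number.  Print it with %lld. */
#include <stdio.h>
#include <stdint.h>

int
main (void)
{
        int64_t stripe_size = 1048576;       /* 1MB, still fits in int */
        int64_t big_size    = 5368709120LL;  /* 5GB, does not fit      */

        /* the cast works by accident for small values ... */
        printf ("cast : stripe_size=%i big_size=%i\n",
                (int) stripe_size, (int) big_size);

        /* ... but printing the value itself is always safe */
        printf ("%%lld : stripe_size=%lld big_size=%lld\n",
                (long long) stripe_size, (long long) big_size);

        return 0;
}
--------------------------------------------------------------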
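
(One more appended sketch.) Alexey mentions above that setting and getting
trusted.stripe1.stripe-size manually works, and that ext3 vs. xfs differ in
get/setfattr support. Here is a small C sketch that reads such an xattr
directly with the Linux getxattr(2) call, roughly what
'getfattr -n trusted.stripe1.stripe-size <file>' does on the back-end. The
default path is made up, and the value is assumed to be stored as readable
text; adjust if it is binary.

--------------------------------------------------------------
/* Read one extended attribute straight from a back-end file.  Run it
 * as root on the back-end FS: "trusted.*" attributes are only visible
 * to privileged processes.  The default path below is made up. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/xattr.h>

int
main (int argc, char *argv[])
{
        const char *path = (argc > 1) ? argv[1] : "/data/export/testfile";
        const char *name = "trusted.stripe1.stripe-size";
        char        value[256];
        ssize_t     len;

        len = getxattr (path, name, value, sizeof (value) - 1);
        if (len < 0) {
                fprintf (stderr, "getxattr (%s, %s): %s\n",
                         path, name, strerror (errno));
                return 1;
        }

        value[len] = '\0';   /* assume a readable text value */
        printf ("%s = %s\n", name, value);
        return 0;
}
--------------------------------------------------------------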