Hi Alexey,

Can you please try with glusterfs--mainline--2.5--patch-493 and check
whether the bug still persists? If the bug is not fixed, could you also
send your glusterfs server and client configuration files?

regards,

On 9/25/07, Alexey Filin <alexey.filin@xxxxxxxxx> wrote:
>
> Hi,
>
> gluster@xxxxxxxxxx/glusterfs--mainline--2.5--patch-485
> fuse-2.7.0-glfs3
> Linux 2.6.9-55.0.2.EL.cern (Scientific Linux CERN SLC release
> 4.5 (Beryllium)), i386
>
> 4 HPC cluster work nodes; each node has two Gigabit interfaces for two
> LANs (the Data Acquisition System LAN and the SAN).
>
> server.vol and client.vol were made from the example in
> http://gluster.org/docs/index.php/GlusterFS_Translators, but with the
> alu scheduler:
>
> brick->posix-locks->io-thr->wb->ra->server
>
> ((client1+client2)->stripe1)+((client3+client4)->stripe2)->afr->unify->iot->wb->ra->ioc
>
> unify is supposed to connect another 4 nodes after the tests.
>
> Copying from the local FS to GlusterFS and back on client1 works fine;
> performance is nearly native (as for local-to-local copies).
> With an ext3 back-end FS (get/setfattr don't work), afr works fine but
> stripe doesn't work at all.
> With an xfs back-end FS (get/setfattr work fine), afr works fine but
> stripe still doesn't work at all.
>
> I also changed client.vol to (client1+client2)->stripe1->iot->wb->ra->ioc,
> and stripe still doesn't work.
>
> The log files don't contain anything interesting.
> 1) How can I make cluster/stripe work?
>
> http://gluster.org/docs/index.php/GlusterFS_FAQ says:
> "...if one uses 'cluster/afr' translator with 'cluster/stripe' then
> GlusterFS can provide high availability."
> 2) Is HA provided only for stripe+afr, or for afr alone too?
>
> I plan to use the cluster work nodes with their local hard disks as
> distributed on-line and off-line storage for the raw data acquired on our
> experimental setup (tape back-up is provided, of course). It is expected
> to hold 10-20 terabytes of raw data in total (the cluster is to be
> upgraded in the future).
> 3) Does it make sense to use cluster/stripe (HA is very desirable) in
> this case?
>
> Thanks in advance for the answers.
>
> Alexey Filin.
> Experiment OKA, Institute for High Energy Physics, Protvino, Russia
>
> PS I also stress-tested GlusterFS with direct file manipulation (through
> the back-end FS); the results look good to me.
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>

--
Raghavendra G
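
For reference, a server.vol implementing the
brick->posix-locks->io-thr->wb->ra->server chain described above might look
roughly like the following. This is a minimal sketch in the mainline--2.5
(1.3-era) volume-spec syntax; the export directory, thread count, and
wide-open auth rule are illustrative assumptions, not values from the
original configs:

  volume brick
    type storage/posix
    option directory /data/export        # illustrative back-end export path
  end-volume

  volume plocks
    type features/posix-locks
    subvolumes brick
  end-volume

  volume iothr
    type performance/io-threads
    option thread-count 4                # illustrative value
    subvolumes plocks
  end-volume

  volume wb
    type performance/write-behind
    subvolumes iothr
  end-volume

  volume ra
    type performance/read-ahead
    subvolumes wb
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.ra.allow *            # wide-open auth, for testing only
    subvolumes ra
  end-volume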
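
The striped half of the matching client.vol might be sketched as below,
again with assumed hostnames, remote-subvolume name, and a 1MB stripe block
size. The unify layer (the alu scheduler plus its required namespace volume)
and the iot/wb/ra/ioc performance translators would then stack on top of
'afr' in the same volume/subvolumes pattern:

  volume client1
    type protocol/client
    option transport-type tcp/client
    option remote-host node1             # illustrative hostname
    option remote-subvolume ra           # topmost volume in server.vol above
  end-volume

  # client2, client3 and client4 are identical apart from remote-host

  volume stripe1
    type cluster/stripe
    option block-size *:1MB              # stripe all files in 1MB blocks (assumed)
    subvolumes client1 client2
  end-volume

  volume stripe2
    type cluster/stripe
    option block-size *:1MB
    subvolumes client3 client4
  end-volume

  volume afr
    type cluster/afr
    option replicate *:2                 # keep a copy on each stripe set
    subvolumes stripe1 stripe2
  end-volume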