Re: cluster/stripe

Hi Alexey,

some more questions.
1. Are the bricks exported by the server empty? I.e., are you trying to
use stripe on already _existing_ files? Note that stripe works only on
files that are created through it.
2. Is the size of the file being striped greater than the block size
configured in the server spec file? In any case, the file should at least
be created on both server bricks.
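
For point 2, a quick way to check from the mount point (the mount path
below is just an example; /data/export is taken from your server spec):

--------------------------------------------------------------
# create a 4MB file through the stripe mount (block-size is 1MB in
# your client spec, so both bricks should receive part of the file)
dd if=/dev/zero of=/mnt/glusterfs/stripe-test bs=1M count=4

# then, on each server, check that the file exists on the brick
ls -l /data/export/stripe-test
--------------------------------------------------------------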

regards,
On 9/26/07, Alexey Filin <alexey.filin@xxxxxxxxx> wrote:
>
> Hi Raghavendra,
>
> I found and fixed an error in stripe.c (as Matthias already proposed), but
> nothing changed, so I simplified the configs to:
> --------------------------------------------------------------
> servers:
>
> # Namespace posix
> volume brick-ns
>   type storage/posix
>   option directory /data/export-ns
> end-volume
>
> volume brick
>   type storage/posix
>   option directory /data/export
> end-volume
>
> ### Trace storage/posix translator.
> volume trace
>   type debug/trace
>   subvolumes brick
>   option debug on
> end-volume
>
> volume server
>  type protocol/server
>  subvolumes brick brick-ns
>  option transport-type tcp/server
> # option bind-address 172.30.2.        # Default is to listen on all interfaces
>  option listen-port 6996                # Default is 6996
> # option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>  option auth.ip.brick.allow 172.30.2.*
>  option auth.ip.brick-ns.allow 172.30.2.*
> end-volume
> --------------------------------------------------------------
> client:
>
> volume client1
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 172.30.2.1
>  option remote-subvolume brick
> end-volume
>
> volume client2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 172.30.2.2
>  option remote-subvolume brick
> end-volume
>
> volume stripe1
>  type cluster/stripe
>  subvolumes client1 client2
> # option block-size *:10MB
>  option block-size *:1MB
> end-volume
>
> ### Trace cluster/stripe translator.
> volume trace
>   type debug/trace
>   subvolumes stripe1
>   option debug on
> end-volume
> --------------------------------------------------------------
>
> added a debug print to stripe.c:stripe_open():
>
> --------------------------------------------------------------
>   striped = data_to_int8 (dict_get (loc->inode->ctx, this->name));
>   local->striped = striped;
>
> +gf_log (this->name, GF_LOG_WARNING,
> +        "MY: stripe_open: local->stripe_size=%i local->striped=%i "
> +        "this->name=(%s)",
> +        (int)local->stripe_size, (int)local->striped, this->name);
>
>   if (striped == 1) {
>     local->call_count = 1;
> --------------------------------------------------------------
>
> got something like this in the client log file:
>
> 2007-09-26 00:57:25 W [stripe.c:1804:stripe_open] stripe1: MY:
> stripe_open: local->stripe_size=0 local->striped=1 this->name=(stripe1)
>
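> (striped prints as 1 but stripe_size as 0, so it may also be worth
> dumping the trusted xattrs of the file on a back-end brick; the path
> below is just an example:)
>
> --------------------------------------------------------------
> getfattr -d -m trusted /data/export/somefile
> --------------------------------------------------------------
>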
> the file is created on one node only. I changed the condition above to:
>
> --------------------------------------------------------------
> /*  if (striped == 1) { */
>   if (!striped) {
>     local->call_count = 1;
> --------------------------------------------------------------
>
> because the condition seems to work the wrong way (?); the else branch
> is the one that handles striped files:
>
>   } else {
>     /* Striped files */
>
> got the same result, the file is created on one node only :(
> setting/getting trusted.stripe1.stripe-size manually with set/getfattr
> works fine
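> (i.e., directly on a back-end file; the path and the 1MB value are
> just examples:)
>
> --------------------------------------------------------------
> setfattr -n trusted.stripe1.stripe-size -v 1048576 /data/export/somefile
> getfattr -n trusted.stripe1.stripe-size /data/export/somefile
> --------------------------------------------------------------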
>
> regards, Alexey
>
> On 9/25/07, Raghavendra G <raghavendra.hg@xxxxxxxxx> wrote:
> >
> > Hi Alexey,
> >
> > Can you please try glusterfs--mainline--2.5--patch-493 and check
> > whether the bug still persists? Also, if the bug is not fixed, can you
> > please send your glusterfs server and client configuration files?
> >
> > regards,
> >
> > On 9/25/07, Alexey Filin <alexey.filin@xxxxxxxxx> wrote:
> >
> > > Hi,
> > >
> > > gluster@xxxxxxxxxx/glusterfs--mainline--2.5--patch-485
> > > fuse-2.7.0-glfs3
> > > Linux 2.6.9-55.0.2.EL.cern (Scientific Linux CERN SLC release
> > > 4.5 (Beryllium)), i386
> > >
> > > 4 HPC cluster worker nodes; each node has two Gigabit interfaces for
> > > two LANs (Data Acquisition System LAN and SAN).
> > >
> > > server.vol and client.vol were made following the example at
> > > http://gluster.org/docs/index.php/GlusterFS_Translators, but with the
> > > alu scheduler:
> > >
> > > brick->posix-locks->io-thr->wb->ra->server
> > > ((client1+client2)->stripe1)+((client3+client4)->stripe2)->afr->unify->iot->wb->ra->ioc
> > >
> > >
> > > unify is supposed to connect another 4 nodes after the tests
> > >
> > > copying from local FS to GlusterFS and back on client1 works fine,
> > > performance is nearly native (as for local to local)
> > > back-end FS ext3 (get/setfattr don't work) => afr works fine, stripe
> > > doesn't work at all
> > > back-end FS xfs (get/setfattr work fine) => afr works fine, stripe
> > > doesn't work at all
> > >
> > > changed client.vol to (client1+client2)->stripe1->iot->wb->ra->ioc =>
> > > stripe still doesn't work
> > >
> > > the log files don't contain anything interesting.
> > > 1) How do I make cluster/stripe work?
> > >
> > > http://gluster.org/docs/index.php/GlusterFS_FAQ
> > > "...if one uses 'cluster/afr' translator with 'cluster/stripe' then
> > > GlusterFS can provide high availability."
> > > 2) Is HA provided only for stripe+afr, or for afr alone too?
> > >
> > > I plan to use cluster worker nodes with local hard disks as
> > > distributed on-line and off-line storage for raw data acquired on our
> > > experimental setup (tape back-up is provided, of course). It's
> > > supposed to hold 10-20 terabytes of raw data in total (the cluster is
> > > expected to be upgraded in the future).
> > > 3) Does it make sense to use cluster/stripe (HA is very desirable) in
> > > this case?
> > >
> > > Thanks in advance for your answers.
> > >
> > > Alexey Filin.
> > > Experiment OKA, Institute for High Energy Physics, Protvino, Russia
> > >
> > > PS: I also tortured GlusterFS with direct file manipulation (through
> > > the back-end FS); the results are good for me
> > > _______________________________________________
> > > Gluster-devel mailing list
> > > Gluster-devel@xxxxxxxxxx
> > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > >
> >
> >
> >
> > --
> > Raghavendra G
>
>
>


-- 
Raghavendra G

