RE: about GlusterFS configuration

I am interested in finding out whether it is feasible to build a global
clustered storage system across different IDCs with a single namespace. That
means the Gluster servers would be located in different IDCs.

Next week I will put together more detailed information about the test
environment and send it to you.

Thanks again for your reply. This project is pretty good and we are happy to
continue testing it.

-----Original Message-----
From: krishna.srinivas@xxxxxxxxx [mailto:krishna.srinivas@xxxxxxxxx] On
Behalf Of Krishna Srinivas
Sent: Friday, November 16, 2007 5:24 PM
To: Felix Chu
Cc: gluster-devel@xxxxxxxxxx
Subject: Re: about GlusterFS configuration

Felix,

Sometimes touch does not call open(), so a better way would be the "od -N1"
command.
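
For example, a minimal sketch for healing everything under the mount point
(assuming a hypothetical mount point of /mnt/glusterfs; "od -N1" reads just the
first byte of each file, which is enough to force an open()):

    # read one byte of every file to trigger open() and hence self-heal
    find /mnt/glusterfs -type f -exec od -N1 {} \; > /dev/null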

Regarding your setup, can you give more details? How would GlusterFS be set up
across 20 data centers? What would the link speed be between them?

Krishna

On Nov 16, 2007 2:38 PM, Felix Chu <felixchu@xxxxxxxxxxxxxxxxxxxx> wrote:
> Hi Krishna,
>
> Thanks for your quick reply.
>
> About self-heal, does that mean that before the open() event is triggered, the
> whole replication cluster will have one less replica than in the normal state?
> If our goal is to bring the replication status back to normal (the same number
> of replicas as before), we need to trigger open() for every file stored in the
> clustered file system, right? If so, is the easiest way to run "touch *" in the
> clustered mount point?
>
> By the way, we will set up a testing environment to create a GlusterFS volume
> across 20 data centres, with point-to-point fibre between the data centres. The
> longest distance between two data centres is about 1000 km. Do you think
> GlusterFS can be applied in this kind of environment? Is there a minimum
> network quality required between the storage servers and clients?
>
> Regards,
> Felix
>
>
> -----Original Message-----
> From: krishna.zresearch@xxxxxxxxx [mailto:krishna.zresearch@xxxxxxxxx] On
> Behalf Of Krishna Srinivas
> Sent: Friday, November 16, 2007 4:19 PM
> To: Felix Chu
> Cc: gluster-devel@xxxxxxxxxx
> Subject: Re: about GlusterFS configuration
>
> On Nov 16, 2007 1:18 PM, Felix Chu <felixchu@xxxxxxxxxxxxxxxxxxxx> wrote:
> > Hi all,
> >
> >
> >
> > I am a new user of the GlusterFS project. I just started testing in a local
> > environment with 3 server nodes and 2 client nodes.
> >
> >
> >
> > So far, it works fine and now I have two questions:
> >
> >
> >
> > 1.      I cannot clearly understand the options related to "namespace". I
> > find that most of the server conf files have separate "DS" and "NS" volumes;
> > what is the purpose of this?
> >
>
> The namespace is used:
> * to assign inode numbers
> * for readdir(): instead of reading the contents of all the subvolumes, unify
> does readdir() just from the NS (see the sketch below).
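>
> For illustration, here is a minimal client-side sketch of how a unify volume
> ties the DS subvolumes to the NS volume (the volume names and the rr scheduler
> here are only assumptions for the example):
>
>     volume mailspool-unify
>             type cluster/unify
>             option namespace mailspool-ns
>             option scheduler rr
>             subvolumes mailspool-ds1 mailspool-ds2
>     end-volume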
>
> >
> >
> > e.g. in
> > http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
> >
> >
> >
> > there are "ds" and "ns" volumes in this config:
> >
> > volume mailspool-ds
> >         type storage/posix
> >         option directory /home/export/mailspool
> > end-volume
> >
> > volume mailspool-ns
> >         type storage/posix
> >         option directory /home/export/mailspool-ns
> > end-volume
> >
> >
> >
> > 2.      In my testing environment, I applied the replication function to
> > replicate from one server to the other 2 servers. Then I unplugged one of the
> > servers. On the client side it was still OK to access the mount point. After
> > a while, I brought the unplugged server back up and found that none of the
> > data written during the outage appears on this server. Are any steps required
> > to sync the data back to the newly recovered server?
> >
>
> You need to open() that file to trigger self-heal for that file.
>
> Krishna
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>