If you specified replica 2 then I believe there will be only 2 writes, not 4. My understanding is that with replica 2, if a file is created on brick 1 it is replicated to brick 2, and that makes the two copies. Have you seen it otherwise?

On Fri, Mar 11, 2011 at 1:40 AM, anthony garnier <sokar6012 at hotmail.com> wrote:
> Hi,
> The GSLB+RR is especially useful for NFS clients; for the gluster
> client it only matters for fetching the volfile.
> Removing a node entry from DNS is indeed a manual process; we are
> looking for a way to do it automatically, perhaps with a script.
> What do you mean by "How do you ensure that a copy of a file in one
> site definitely is saved on the other site as well?"
> Servers from replica pools 1 and 2 are mixed between the two
> datacenters:
> Replica pool 1 : Brick 1,2,3,4
> Replica pool 2 : Brick 5,6,7,8
>
> Datacenter 1 : Brick 1,2,5,6
> Datacenter 2 : Brick 3,4,7,8
>
> This way each datacenter holds two replicas of each file, and each
> datacenter can stay independent if there is a WAN interruption.
>
> Regards,
> Anthony
>
>> Date: Thu, 10 Mar 2011 12:00:53 -0800
>> Subject: Re: How to use gluster for WAN/Data Center replication
>> From: mohitanchlia at gmail.com
>> To: sokar6012 at hotmail.com; gluster-users at gluster.org
>>
>> Thanks for the info! I am assuming it's a manual process to remove
>> nodes from the DNS?
>>
>> If I am not wrong, I think load balancing happens by default for the
>> native gfs client you are using; the initial mount is required only
>> to read the volfile.
>>
>> How do you ensure that a copy of a file in one site definitely is
>> saved on the other site as well?
>>
>> On Thu, Mar 10, 2011 at 1:11 AM, anthony garnier
>> <sokar6012 at hotmail.com> wrote:
>> > Hi,
>> > I have done a setup (see below) on a multi-site datacenter with
>> > gluster, and currently it doesn't work properly, though there are
>> > some workarounds.
>> > The main problem is that replication is synchronous and there is
>> > currently no way to switch it to async mode.
>> > I've run some tests (iozone, tar, bonnie++, scripts...) and
>> > performance is poor, especially with small files. We use a URL to
>> > access the servers:
>> > glusterfs.cluster.inetcompany.com
>> > This URL is in DNS GSLB (geo DNS) + RR (round robin).
>> > It means a client in datacenter 1 is always bound, randomly, to a
>> > storage node in its own datacenter.
>> > Clients use this command to mount the filesystem:
>> > mount -t glusterfs glusterfs.cluster.inetcompany.com:/venus
>> > /users/glusterfs_mnt
>> >
>> > If one node fails, it is removed from the DNS list; the client does
>> > a new DNS query and is bound to an active node in its datacenter.
>> > You could also use a WAN accelerator.
>> >
>> > We are currently running intra-site and are waiting for the async
>> > replication feature expected in version 3.2. It should come soon.
>> >
>> > Volume Name: venus
>> > Type: Distributed-Replicate
>> > Status: Started
>> > Number of Bricks: 2 x 4 = 8
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: serv1:/users/exp1 \
>> > Brick2: serv2:/users/exp2  \_ Replica pool 1 \
>> > Brick3: serv3:/users/exp3  /                  \
>> > Brick4: serv4:/users/exp4 /                    ===> Distribution
>> > Brick5: serv5:/users/exp5 \                   /
>> > Brick6: serv6:/users/exp6  \_ Replica pool 2 /
>> > Brick7: serv7:/users/exp7  /
>> > Brick8: serv8:/users/exp8 /
>> >
>> > Datacenter 1 : Brick 1,2,5,6
>> > Datacenter 2 : Brick 3,4,7,8
>> > Distance between datacenters : 500 km
>> > Latency between datacenters : 11 ms
>> > Data rate between datacenters : ~100 Mb/s
>> >
>> > Regards,
>> > Anthony
>> >
>> >> Message: 3
>> >> Date: Wed, 9 Mar 2011 16:44:27 -0800
>> >> From: Mohit Anchlia <mohitanchlia at gmail.com>
>> >> Subject: How to use gluster for WAN/Data Center replication
>> >> To: gluster-users at gluster.org
>> >> Message-ID: <AANLkTi=dkK=zX0QdCfnKeLJ5nkF1dF3+g1hxDzFZNvwx at mail.gmail.com>
>> >> Content-Type: text/plain; charset=ISO-8859-1
>> >>
>> >> How do we set up gluster for WAN/datacenter replication? Are there
>> >> others using it this way?
>> >>
>> >> Also, how can writes be made asynchronous for datacenter
>> >> replication?
>> >>
>> >> We have a requirement to replicate data to the other datacenter
>> >> as well.
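For reference, the quoted volume info is consistent with a create command along these lines. This is a sketch assuming the 3.x-era `gluster` CLI; the hostnames and paths are the ones from the thread, and brick order is what determines which bricks form each replica pool.

```
# Sketch (3.x-era CLI assumed): replica count 4, bricks listed so that
# serv1-serv4 form replica pool 1 and serv5-serv8 form replica pool 2.
gluster volume create venus replica 4 transport tcp \
  serv1:/users/exp1 serv2:/users/exp2 serv3:/users/exp3 serv4:/users/exp4 \
  serv5:/users/exp5 serv6:/users/exp6 serv7:/users/exp7 serv8:/users/exp8
gluster volume start venus

# Clients mount via the GSLB+RR name; the mount server is only
# contacted to fetch the volfile.
mount -t glusterfs glusterfs.cluster.inetcompany.com:/venus /users/glusterfs_mnt
```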
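The brick grouping discussed in the thread can be sketched as follows. In a distributed-replicate volume, consecutive bricks, in the order given at volume-create time, form one replica set; a file hashes to exactly one set (distribution) and is written to every brick in that set (replication). This illustrative Python snippet is not part of gluster itself; it just models why the "2 x 4 = 8" layout puts two of each file's four copies in each datacenter, and why replica 2 would mean only two writes per file.

```python
def replica_pools(bricks, replica):
    """Consecutive bricks (in volume-create order) form one replica set."""
    return [bricks[i:i + replica] for i in range(0, len(bricks), replica)]

bricks = list(range(1, 9))            # brick numbers 1..8
pools = replica_pools(bricks, 4)      # "2 x 4 = 8": 2 pools of 4 bricks

datacenter1 = {1, 2, 5, 6}            # site assignment from the thread
datacenter2 = {3, 4, 7, 8}

# Each file lands in one pool and is written synchronously to every
# brick in it: 4 writes, 2 of which land in each datacenter.
for pool in pools:
    in_dc1 = sum(1 for b in pool if b in datacenter1)
    in_dc2 = sum(1 for b in pool if b in datacenter2)
    print(pool, "-> copies in DC1:", in_dc1, "DC2:", in_dc2)
# prints:
# [1, 2, 3, 4] -> copies in DC1: 2 DC2: 2
# [5, 6, 7, 8] -> copies in DC1: 2 DC2: 2
```

With `replica_pools(bricks, 2)` each pool shrinks to two bricks, which matches the point in the last reply: replica 2 produces two writes per file, not four.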