Re: AFR between two bricks over 3000 miles

Hi Nathan,
We are working on a solution, but it will take time. We will let you
know once it is done; if it is not done within two weeks, please
follow up with us.
Regards
Krishna

On Mon, Mar 10, 2008 at 5:19 AM,  <nathan@xxxxxxxxxxxx> wrote:
>
>  Any update on this? Would love to be able to use gluster between east and
>  west coast sites.
>
>
>
>  ><>
>  Nathan Stratton
>  nathan at robotics.net
>  http://www.robotics.net
>
>
>
> On Tue, 4 Mar 2008, Anand Avati wrote:
>
>  > 2008/3/4, nathan@xxxxxxxxxxxx <nathan@xxxxxxxxxxxx>:
>  >>
>  >>
>  >> I tried moving this to the server side and that did not help. The 74 ms
>  >> delay between the sites appears to make gluster unusable, even though there
>  >> is 100 meg of free capacity. Any plans to background some of the
>  >> back-and-forth communication so that with write-behind you can get close to
>  >> local speed on writes?
>  >
>  >
>  > Currently write-behind works only for write() calls. I too am seeing the
>  > issue of the highest-latency subvolume becoming the bottleneck for
>  > operations, though it is not unreasonable to expect application performance
>  > at the speed of the fastest subvolume. I will discuss the impact and side
>  > effects with Krishna and let you know what we come up with.
>  >
>  > avati
>  >
>  >
>  >
>  >> <>
>  >> Nathan Stratton
>  >> nathan at robotics.net
>  >> http://www.robotics.net
>  >>
>  >>
>  >> On Mon, 3 Mar 2008, nathan@xxxxxxxxxxxx wrote:
>  >>
>  >>> On Mon, 3 Mar 2008, Anand Avati wrote:
>  >>>
>  >>>> the bottleneck seems to be create/open which is synchronous over all
>  >>>> subvolumes. do you have numbers without involving the remote site/afr ?
>  >>>
>  >>> Just mounted nyc rather than mirror and that cut the time to .076 sec vs
>  >>> local .01 sec.
>  >>>
>  >>> This is a 100 meg link between NYC and SJC and only transferring 748K. My
>  >>> guess is even if we had a full gig link between the two we would have the
>  >>> same problem. A message sent from NYC to SJC will still take 60 ms to fly
>  >>> across the country. So if you have a lot of messages flying back and forth
>  >>> waiting for data to be written, you will have this problem.
>  >>>
>  >>> Another clue is that it takes 27 sec just to rm -fr the files! If I moved
>  >>> afr from the client to the server, would that help?
>  >>>
>  >>> [root@xen0 glusterfs]# time cp -r /etc/sysconfig/ /share/mirror
>  >>>
>  >>> real    0m0.076s
>  >>> user    0m0.000s
>  >>> sys     0m0.007s
>  >>> [root@xen0 glusterfs]# time cp -r /etc/sysconfig/ /share/
>  >>>
>  >>> real    0m0.010s
>  >>> user    0m0.001s
>  >>> sys     0m0.006s
>  >>> [root@xen0 glusterfs]#
>  >>>
>  >>> With mirror mounted (sjc and nyc up):
>  >>>
>  >>> [root@xen0 glusterfs]# time cp -r /etc/sysconfig/ /share/mirror
>  >>>
>  >>> real    0m52.195s
>  >>> user    0m0.000s
>  >>> sys     0m0.001s
>  >>>
>  >>> [root@xen0 glusterfs]# time rm -fr /share/mirror/*
>  >>>
>  >>> real    0m27.030s
>  >>> user    0m0.000s
>  >>> sys     0m0.002s
>  >>>
>  >>> <>
>  >>> Nathan Stratton
>  >>> nathan at robotics.net
>  >>> http://www.robotics.net
>  >>>
>  >>
>  >
>  >
>  >
>  > --
>  > If I traveled to the end of the rainbow
>  > As Dame Fortune did intend,
>  > Murphy would be there to tell me
>  > The pot's at the other end.
>  >
>
>
>  _______________________________________________
>  Gluster-devel mailing list
>  Gluster-devel@xxxxxxxxxx
>  http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
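
The latency arithmetic in the thread above can be put into a quick back-of-envelope model. This is a minimal sketch, not anything from GlusterFS itself: the file count (100) and the number of synchronous round trips per file (7, covering create/open, writes, and close) are illustrative assumptions chosen to show how per-file round trips on a 74 ms link swamp the raw transfer time of 748K, regardless of bandwidth.

```python
# Back-of-envelope model: on a high-RTT link, serialized per-file round
# trips dominate the copy time; raw bandwidth barely matters.
# All inputs are illustrative assumptions, not measurements from the thread,
# except the 74 ms RTT, 100 Mbit link, and 748K transfer size quoted above.

RTT = 0.074              # seconds, NYC <-> SJC round trip (74 ms)
BANDWIDTH = 100e6 / 8    # bytes/sec usable on a 100 Mbit link
TOTAL_BYTES = 748 * 1024 # ~748K copied in the cp test

def copy_time(num_files, sync_roundtrips_per_file):
    """Split the copy into a latency-bound part (serialized round trips)
    and a bandwidth-bound part (raw data transfer)."""
    latency_cost = num_files * sync_roundtrips_per_file * RTT
    transfer_cost = TOTAL_BYTES / BANDWIDTH
    return latency_cost, transfer_cost

latency, transfer = copy_time(num_files=100, sync_roundtrips_per_file=7)
print(f"latency-bound: {latency:.1f}s, bandwidth-bound: {transfer:.3f}s")
```

With these assumed numbers the latency-bound term comes out around 52 s while the transfer term is well under a tenth of a second, which is the shape of Nathan's measurement: upgrading to a gigabit link shrinks only the tiny transfer term, so the copy stays slow unless the round trips are made asynchronous.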



