Mixing TCP/IP and IB/RDMA in a distributed replicated volume for disaster recovery

Hi,

we would like to build a Gluster storage system that combines our
need for performance with our need for disaster recovery. I saw a
couple of posts indicating that mixing transports is possible
(http://gluster.org/pipermail/gluster-users/2010-February/003862.html),
but I am not 100% clear whether it really works the way I expect.

Let's assume I have a total of 6 storage servers and bricks and want
to spread them across 2 buildings for DR: server1-3 are in building1,
server4-6 are in building2. I create a distributed replicated volume
and make sure that server1 replicates to server4, server2 to server5,
and server3 to server6. I have this running today and it's working
reasonably well.
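
For reference, this is roughly how the volume was created; the brick
ordering is what pairs the servers up across the two buildings (the
volume name gv0 and the brick paths are just placeholders):

    # bricks are taken in pairs, so server1/server4, server2/server5
    # and server3/server6 become the replica pairs across the buildings
    gluster volume create gv0 replica 2 transport tcp \
        server1:/export/brick1 server4:/export/brick1 \
        server2:/export/brick1 server5:/export/brick1 \
        server3:/export/brick1 server6:/export/brick1
    gluster volume start gv0

If I read the docs correctly, the volume would have to be created with
(or converted to) "transport tcp,rdma" so that TCP and RDMA clients can
mount it at the same time.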

Now I have a couple of high performance compute systems in the server
room of building1 that are currently using glusterfs to talk to the
gluster storage cluster. Throughput is great, latency not so much. I
believe I have 2 options:

1. Connect server1,2,3 to my new IB switch (with RDMA) and keep my
compute servers connected via a TCP mount point /mnt/gluster (rough
mount commands for both options are sketched after option 2). Latency
will be reduced during normal operations. Gluster will figure out to
use server1,2,3 because they can serve the files much quicker and will
leave server4-6 for replication. If server2 dies, server4 will
automatically take over serving files. Performance is reduced for many
files, but the end user does not experience an outage. Is this
correct?

2. Connect server1,2,3 AND my compute systems to my new IB switch and
mount the storage via an RDMA mount point /mnt/gluster-fast. End users
will experience even lower latency. If server2 dies, users will still
be able to access files residing on the other 2 servers, but all files
that were on server2 become unavailable and long running compute jobs
might die. The end user then needs to switch to the TCP mount point
/mnt/gluster to be able to access the files that were on server2 (via
its replica on server5) and the other servers. Is this correct?


Are my assumptions correct?

Should I use 2 mount points, /mnt/gluster and /mnt/gluster-fast, to
give users the choice between very reliable and very performant?
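
If I go that route, I imagine each client would carry two fstab
entries along these lines (the backupvolfile-server and transport
options are my guesses from the docs, not something I have tested):

    # reliable TCP mount; fall back to server4 for fetching the volfile
    server1:/gv0  /mnt/gluster       glusterfs  defaults,_netdev,backupvolfile-server=server4  0 0
    # low-latency RDMA mount over the IB switch
    server1:/gv0  /mnt/gluster-fast  glusterfs  defaults,_netdev,transport=rdma                0 0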

Thanks
dipe

