Separate replication/self-healing and access traffic

Hi,

@Joop:
thanks for your info! :)
To be honest, I had planned the same here: iptables would work, but it also adds some extra CPU load and network latency.

@all:
I read in the documentation that the Gluster server pool automatically tells the GlusterFS client which storage servers it should use.
So even if I point the client at just one server, the client will learn about the complete pool with all peers, right?
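
For example, I would mount like this and expect the client to fetch the full volume layout from that one server (hostnames and volume name are just placeholders, and the exact name of the backup option seems to differ between versions):

  # initial mount: gluster01 is only used to fetch the volfile,
  # afterwards the client talks to all bricks directly
  mount -t glusterfs gluster01:/vol0 /mnt/vol0

  # optionally name a second server for the initial volfile fetch
  # (backupvolfile-server / backup-volfile-servers, depending on version)
  mount -t glusterfs -o backupvolfile-server=gluster02 gluster01:/vol0 /mnt/vol0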

Is there any layer in the Gluster stack that balances client requests over all GlusterFS servers, or does it make sense to use DNS round robin here?
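
Something like this is what I have in mind for DNS RR, i.e. one name with one A record per Gluster server (name and IPs are placeholders):

  ; round-robin entry in the storage zone
  gluster    IN  A   10.1.1.1
  gluster    IN  A   10.1.1.2
  gluster    IN  A   10.1.1.3

  # clients then mount via the round-robin name
  mount -t glusterfs gluster:/vol0 /mnt/vol0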
Thanks in advance for your info!

Best,
Sven.


Sven Knohsalla | System Administration | Netbiscuits

Office +49 631 68036 433 | Fax +49 631 68036 111 | E-Mail s.knohsalla at netbiscuits.com | Skype: netbiscuits.admin
Netbiscuits GmbH | Europaallee 10 | 67657 | GERMANY

From: Joop [mailto:jvdwege at xs4all.nl]
Sent: Sunday, June 9, 2013 13:33
To: Sven Knohsalla
Cc: gluster-users at gluster.org
Subject: Re: Separate replication/self-healing and access traffic

Hi Sven,




I just read the official documentation and googled for how to separate the storage replication/self-healing network from the storage-access network (via NFS or the GlusterFS client),
but I couldn't find a clear answer.

Is it possible to separate the replication and access networks with the options

option transport.socket.bind-address

option auth.addr.brick.allow

?
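
Roughly what I have in mind (IP, subnet and volume name are placeholders; I'm not sure whether editing the volfiles by hand is still the supported way, and the auth.addr.*.allow option seems to map to the auth.allow volume option when set via the CLI):

  # in /etc/glusterfs/glusterd.vol, inside the existing "volume management" block:
  option transport.socket.bind-address 10.1.1.1

  # client access restriction per volume, via the CLI:
  gluster volume set vol0 auth.allow 10.1.1.*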

If so, is it possible to keep using the native GlusterFS client, or do I have to use NFS for storage access and the first option to allow server-side replication?

Are there any issues I may run into, or anything I haven't paid attention to at this point?


I have implemented a kind of split-DNS solution whereby the management layer resolves the storage hostnames to a different network than the storage interfaces. I'm using oVirt with Gluster in this way and things work rather smoothly. Jeff's solution works too, but as he says in his blog, using iptables is less transparent; split DNS has the advantage that it either works or it clearly doesn't, e.g. if you forget to add a node to the storage DNS zone.
For example, the oVirt engine sees stor_srv01 as 192.168.1.1, while the storage layer sees it as 10.1.1.1.
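
Roughly like this, e.g. with BIND views (just a sketch, not my actual config; zone and file names are made up):

  view "mgmt" {
      match-clients { 192.168.1.0/24; };
      zone "stor.example" { type master; file "db.stor.mgmt"; };
  };
  view "storage" {
      match-clients { 10.1.1.0/24; };
      zone "stor.example" { type master; file "db.stor.storage"; };
  };

  # db.stor.mgmt:     stor_srv01  IN A  192.168.1.1
  # db.stor.storage:  stor_srv01  IN A  10.1.1.1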

Still got questions? Go ahead.

Regards,

Joop