Re: Multi-head NFS/Gluster

It gets even worse.
I like round-robin mode, and some intelligent enterprise-grade switches can handle it well, but they all have conditions.
I'm going to use Avaya as an example because I recently diagnosed one of these issues on an Avaya switch.

Round robin is supported with a simple MLT (multi-link trunk, the equivalent of Cisco's EtherChannel), but when you do something fancy like an SLT (split link trunk), which is the same thing split across two switches, it fragments the frames and causes massive issues. It also doesn't work properly with UDP or multicast traffic, because the switch can't figure out how to reassemble the packets in the right order, and it increases the CPU utilisation on the switch. Finally, it's only a sending policy on the host, not on the switch. I have heard you can get special firmware from Avaya to support it as a sending policy on the switch, but Avaya doesn't like to give it out because it causes too many issues that most of their customers don't fully appreciate before trying to use it.

Now, something I can advise is to put a load balancer with a VIP in front of the nodes to further distribute your NFS clients across them.
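
For illustration only, a rough LVS/ipvsadm sketch of that idea (the
VIP 192.168.1.100 and the backend addresses are made-up examples, and
it assumes clients only need TCP 2049 as with NFSv4; NFSv3 also needs
portmapper and mountd reachable):

  # On the director: virtual service on the VIP, round-robin across
  # two NFS heads, NAT mode so replies return via the director
  ipvsadm -A -t 192.168.1.100:2049 -s rr
  ipvsadm -a -t 192.168.1.100:2049 -r 192.168.1.11:2049 -m
  ipvsadm -a -t 192.168.1.100:2049 -r 192.168.1.12:2049 -m

Since NFS clients hold long-lived TCP connections, this spreads
clients across nodes at mount time rather than balancing individual
packets.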



-- Sent from my HP Pre3


On Jan 12, 2014 18:02, Dan Mons <dmons@xxxxxxxxxxxxxxxxxx> wrote:

And now I've read the second part of your post where you want higher
bandwidth. Linux bonding mode 0 (balance-rr) can give you aggregate
bandwidth equal to the sum of your interfaces (i.e. bonding four 1GbE
interfaces gives you 4Gbit/s).
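
To make that concrete, here's a minimal balance-rr sketch using
iproute2 (the interface names and address are placeholders to adjust;
older distros do the same thing through the bonding sysfs files or
ifenslave):

  # Create the bond in round-robin mode with link monitoring
  ip link add bond0 type bond mode balance-rr miimon 100
  # Slaves must be down before they can be enslaved
  ip link set eth0 down; ip link set eth0 master bond0
  ip link set eth1 down; ip link set eth1 master bond0
  ip link set eth2 down; ip link set eth2 master bond0
  ip link set eth3 down; ip link set eth3 master bond0
  # Address the bond itself, not the slaves
  ip addr add 192.168.1.10/24 dev bond0
  ip link set bond0 up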

There are some caveats with certain switches, however. The "smarter"
the switch, the more problems you'll often have (managed switches
sometimes don't like seeing MAC addresses moving quickly between
physical ports). If that's the case, your switch should support some
sort of IEEE 802.1ax or IEEE 802.3ad feature, in which case, if you
hash on receiving MAC (instead of receiving IP), you'll get the same
aggregate bandwidth result. You can then use bonding mode 4 on the
Linux nodes to let the switch negotiate these modes.
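
A rough mode 4 / 802.3ad sketch, assuming the switch ports are already
configured as an LACP trunk (again, the interface names are
placeholders, and layer2 hashing means a single client still tops out
at one slave's speed):

  # LACP bond; the switch must present a matching 802.3ad LAG
  ip link add bond0 type bond mode 802.3ad miimon 100 \
      lacp_rate fast xmit_hash_policy layer2
  ip link set eth0 down; ip link set eth0 master bond0
  ip link set eth1 down; ip link set eth1 master bond0
  ip link set bond0 up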

If you're using unmanaged switches however, then bonding mode 0
(balance-rr) is what you're after.

-Dan

----------------
Dan Mons
R&D SysAdmin
Unbreaker of broken things
Cutting Edge
http://cuttingedge.com.au


On 13 January 2014 08:52, Dan Mons <dmons@xxxxxxxxxxxxxxxxxx> wrote:
> I've configured all of my Gluster nodes to use standard Linux Ethernet bonding.
>
> We've got Myricom PCIe 10GbE cards with two NICs per card, bonded
> with the balance-xor (mode 2) option. That restricts any individual
> client to 10gbit/s, but can give a global 20gbit/s in and out of each
> node when multiple clients hit them.
>
> Gluster then only cares about one logical interface, and NFS
> configuration is simple.
>
> -Dan
> ----------------
> Dan Mons
> R&D SysAdmin
> Unbreaker of broken things
> Cutting Edge
> http://cuttingedge.com.au
>
>
> On 13 January 2014 04:03, Nux! <nux@xxxxxxxxx> wrote:
>> Hi,
>>
>> Has anyone tried to do multi-head NFS with a gluster setup? I'm thinking of
>> something similar to https://fedorahosted.org/cluster/wiki/MultiHeadNFS
>> What I'm trying to achieve is more throughput than a bonded gigabit link can
>> give me. I want something like "multipath" for NFS.
>>
>> Thoughts? Any gotchas?
>>
>> Lucian
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@xxxxxxxxxxx
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
