Switch recommendations

Hello John,
Thanks for that advice.  I would like to convert the network to 
Infiniband but my budget certainly wouldn't cover 18 Infiniband NICs on 
top of the Infiniband switch.  I thought I might be able to justify 
splashing out on a fancy GigE switch if I could be sure that users would 
notice the difference.
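
One way to check whether users would really notice is to time a
small-file workload directly on the GlusterFS mount.  Below is a minimal
Python sketch along those lines; the mount path and file count are made
up, so point it at a scratch directory on the volume:

    #!/usr/bin/env python
    # Minimal small-file benchmark sketch: write and then re-read many
    # small files and report throughput.  MOUNT is a hypothetical test
    # directory on the GlusterFS volume.
    import os
    import time

    MOUNT = "/mnt/glusterfs/benchtest"   # hypothetical path
    COUNT = 1000                         # number of small files
    SIZE = 4096                          # bytes per file

    if not os.path.isdir(MOUNT):
        os.makedirs(MOUNT)
    payload = b"x" * SIZE

    start = time.time()
    for i in range(COUNT):
        with open(os.path.join(MOUNT, "f%06d" % i), "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())   # push each small write to the servers
    print("write: %.1f files/s" % (COUNT / (time.time() - start)))

    start = time.time()
    for i in range(COUNT):
        with open(os.path.join(MOUNT, "f%06d" % i), "rb") as f:
            f.read()
    print("read:  %.1f files/s" % (COUNT / (time.time() - start)))

Running that with the servers split across switches, and again with them
all on one switch, should show how much of the slowdown is really the
inter-switch hop.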

I like the idea of setting up a LAG between the existing Dell 5424 
switches, but I don't have enough spare ports at the moment.  However, I
will bear that in mind for the future.
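
For future reference, setting up the LAG on the 54xx CLI should be
roughly a matter of putting the inter-switch ports into a channel group
on each switch.  A rough sketch (the port numbers are hypothetical, and
the exact syntax should be checked against the 5424 manual):

    console# configure
    console(config)# interface range ethernet g(21-24)
    console(config-if)# channel-group 1 mode on
    console(config-if)# exit

"mode on" makes the LAG static; "mode auto" negotiates it with LACP if
both ends support that.  The same channel group has to be configured on
the second switch, with the 2-4 inter-switch cables plugged into those
ports.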

-Dan.

On 01/27/2012 01:38 PM, John Lauro wrote:
> If you are considering as much as £3500 for a switch, you might want to
> consider InfiniBand QDR instead. We don't currently have it here, but are
> considering it.  From what I can tell it has lower latency, can do 40Gbps,
> and is reasonably priced (slightly better than 10GbE, not comparing
> directly to plain GigE).  That said, you would also have to budget for
> cards and cables even though the switch price by itself is not so bad...
>
> As you have multiple switches (sounds like 5524 or maybe 5424), the first
> near-zero-cost change you should make (if not already done) is to set up a
> LAG between the switches and run 2-4 cables between them instead of 1.
>
>
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org [mailto:gluster-users-
>> bounces at gluster.org] On Behalf Of Dan Bretherton
>> Sent: Friday, January 27, 2012 8:05 AM
>> To: gluster-users
>> Subject: Switch recommendations
>>
>> Dear All,
>> I need to buy a bigger GigE switch for my GlusterFS cluster and I am
>> trying to decide whether or not a much more expensive one would be
>> justified.  I have limited experience with networking so I don't know if
>> it would be appropriate to spend £500, £1500 or £3500 for a 48-port
>> switch.  Those rough costs are based on a comparison of 3 Dell
>> PowerConnect switches: the 5548 (bigger version of what we have now),
>> the 6248 and the 7048.  The servers in the cluster are nothing special -
>> mostly Supermicro with SATA drives and 1GigE network adapters.  I can
>> only justify spending more than ~£500 if I can be sure that users would
>> notice the difference.  Some of the users' applications do lots of small
>> reads and writes, and they do run much more slowly if all the servers
>> are not connected to the same switch, as is the case now while I don't
>> have a big enough switch.  Any advice or comments would be much
>> appreciated.
>>
>> Regards
>> Dan.
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

