Re: Network hardware recommendations

Hello,

On Mon, 06 Oct 2014 10:19:28 +0700 Ariel Silooy wrote:

> Hello fellow Ceph users, right now we are researching Ceph for our
> storage.
> 
> We have a cluster of 3 OSD nodes (and 5 MONs) for our RBD disks, which
> for now we are using with an NFS proxy setup. On each OSD node we have
> 4x 1G Intel copper NICs (not sure about the model number, but I'll look
> it up in case anyone asks). Up until now we have been testing on one
> NIC, as we don't (yet) have a network switch with link
> aggregation/teaming support.
> 
> I suppose since it's Intel we should try to get jumbo frames working
> too, so I hope someone can recommend a good switch that is known to
> work well with Intel NICs.
>
Any decent switch with LACP support will really do.
And by that I mean Cisco, Brocade, etc.

But that won't give you redundancy if a switch fails, see below.
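
On the host side the LACP end is just a regular Linux bond in 802.3ad
mode. A rough sketch, Debian-style /etc/network/interfaces (interface
names, addresses and the MTU are only placeholders, the exact syntax
depends on your distribution, and only set the MTU once the switch
ports are configured for jumbo frames as well):

  auto bond0
  iface bond0 inet static
      address 192.168.0.11
      netmask 255.255.255.0
      bond-slaves eth0 eth1
      bond-mode 802.3ad            # LACP, needs matching config on the switch
      bond-miimon 100              # link monitoring interval in ms
      bond-xmit-hash-policy layer3+4
      mtu 9000                     # jumbo frames, optional

You can look at /proc/net/bonding/bond0 to see whether the LACP
negotiation actually came up, and verify jumbo frames end to end with
something like "ping -M do -s 8972 <peer>".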

> We are looking for recommendations on what kind of network switch,
> network layout, brand, model, whatever... as we are (kind of) new to
> building our own storage and have no experience with Ceph.
>
 
TRILL ( http://en.wikipedia.org/wiki/TRILL_(computing) ) based switches
(we have some Brocade VDX ones) have the advantage that they can do LACP
across 2 switches.
That means you get full speed while both switches are up and still have
redundancy (at half speed) if one goes down.
They are probably too pricey for a 1Gb/s environment though, but that's
for you to investigate and decide.

Otherwise you'd wind up with something like 2 normal switches and half
your possible speed, as one link is always just standby.
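
For the record, that standby setup is just the bond in active-backup
mode, which needs no special switch support: one NIC to each switch,
only one link active at a time. Something along these lines (again
just a sketch, names and addresses are placeholders):

  auto bond0
  iface bond0 inet static
      address 192.168.0.11
      netmask 255.255.255.0
      bond-slaves eth0 eth1        # eth0 -> switch A, eth1 -> switch B
      bond-mode active-backup      # no LACP/MLAG needed on the switches
      bond-miimon 100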

Segregation of client and replication traffic (public/cluster network)
probably won't make much sense, as any decent switch will be able to
handle the bandwidth of all its ports, and with a combined network (2
active links) you get the potential benefit of higher read speeds for
clients.
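
If you did want to segregate later, it is just two options in ceph.conf
on all nodes; leave "cluster network" out and everything runs over the
public one. The subnets below are only examples:

  [global]
      public network  = 192.168.0.0/24    # client <-> MON/OSD traffic
      cluster network = 192.168.1.0/24    # OSD <-> OSD replication/recovery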

> We are also looking at the feasibility of using fibre-channel instead
> of copper, but we don't know if it would help much in terms of the
> speed-improvement/$ ratio, since we already have 4 NICs on each OSD
> node. Should we go for it?
>
Why would you?
For starters, I think you mean fiber optics, as Fibre Channel is
something else. ^o^
Fiber optics only make sense when you're going longer distances than
your cluster size suggests.

If you're looking for something that is both faster and less expensive
than 10Gb/s Ethernet, investigate InfiniBand.

Christian

> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



