Re: Disk Density Considerations

On 11/06/2013 09:36 AM, Dimitri Maziuk wrote:
> On 2013-11-06 08:37, Mark Nelson wrote:
> ...
>> Taking this even further, options like the Hadoop fat twin nodes with
>> 12 drives in 1U could potentially be even denser, while spreading the
>> drives out over even more nodes.  Now instead of 4-5 large dense nodes
>> you have maybe 35-40 small dense nodes.  The downside here, though, is
>> that the cost may be a bit higher and you have to slide out a whole
>> node to swap drives, though Ceph is more tolerant of this than many
>> distributed systems.
>
> Another one is 35-40 switch ports vs 4-5. I hear "regular" 10G ports
> eat up over 10 watts of juice, and Cat6a cable offers a unique
> combination of poor design and high cost. It's probably OK to need
> 35-40 routable IP addresses: you can add another interface & subnet
> to your public-facing clients.

I figure it's about tradeoffs. A single 10GbE link for 90 OSDs is heavily oversubscribed: 90 spinning drives at roughly 100 MB/s each works out to something like 9 GB/s of aggregate disk bandwidth against about 1.25 GB/s for one 10GbE link. You'll probably be doing at least dual 10GbE (one for the front/public network and one for the back/cluster network), and for such heavy systems you may want redundant network links to reduce the chance of failure, since one of those nodes going down is going to have a huge impact on the cluster while it's out.
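For reference, the front/back split above maps onto the public and cluster network settings in ceph.conf. A minimal sketch, with made-up placeholder subnets:

    [global]
        # front-side network used by clients and monitors (placeholder subnet)
        public network = 10.0.0.0/24
        # back-side network for OSD replication and recovery traffic (placeholder subnet)
        cluster network = 10.0.1.0/24

If cluster network is left unset, replication and recovery traffic shares the public network, which is exactly where a single oversubscribed link on a 90-OSD box hurts the most.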

With 35-40 smaller nodes you might do single or dual 10GbE for each node if you are shooting for high performance, but if cost is the motivating factor you could potentially do a pair of 2-way bonded 1GbE links (one bond for the public network, one for the cluster network); a rough sketch follows below. Having redundant links matters less here because the impact of a single node failure is far smaller.
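Purely as an illustration of the bonded-1GbE option, here is what one of the two bonds might look like in Debian-style ifupdown syntax; the interface names and address are assumptions, and any distro's own network tooling works just as well:

    # /etc/network/interfaces -- 2x1GbE LACP bond carrying the Ceph public network
    auto bond0
    iface bond0 inet static
        address 10.0.0.21              # placeholder OSD host address
        netmask 255.255.255.0
        bond-slaves eth0 eth1          # the two 1GbE ports in this bond
        bond-mode 802.3ad              # LACP; requires matching switch configuration
        bond-miimon 100                # link monitoring interval in ms
        bond-xmit-hash-policy layer3+4 # spread flows across both links

A second bond on the other two ports would carry the cluster network. Keep in mind that with 802.3ad any single TCP stream still tops out at 1 Gb/s; only the aggregate traffic gets spread across both links.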

As for Cat6 (10GBASE-T) vs SFP+, I tend to favor SFP+ with twinax direct-attach cables. The cables are more expensive up front, but the cards tend to be a bit cheaper and the per-port power draw is lower. I've heard the newest generation of 10GBASE-T products has improved dramatically though, so maybe it's a harder decision now.


> Dima

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



