Network redundancy pros and cons, best practices, suggestions?

Dear Ceph users,

we are planning a Ceph storage cluster from scratch. It might grow to 1 PB
within the next 3 years, spanning multiple buildings, with new network
infrastructure for the cluster, etc.

I have had some excellent training on Ceph, so the essential fundamentals
are familiar to me, and I know our goals/dreams can be reached. :)

There is just "one tiny piece" of the design I'm currently unsure about. :)

Ceph follows a keep-it-small-and-simple approach: e.g. don't use RAID
controllers, use more boxes and disks, a fast network, etc.

So in our current design we plan a 40Gb storage LAN and a 40Gb client LAN.

Would you suggest connecting the OSD nodes redundantly to both networks?
That would end up with 4 * 40Gb ports in each box and two switches to
connect to.
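
Just to make that concrete, here is roughly what I have in mind per OSD
node, assuming one bond per network and the usual public/cluster split in
ceph.conf (interface names and addresses below are only placeholders):

    # /etc/ceph/ceph.conf, network part only
    [global]
    public network  = 10.10.10.0/24   # client LAN
    cluster network = 10.10.20.0/24   # storage/replication LAN

    # Debian-style /etc/network/interfaces, one bond per network
    auto bond0
    iface bond0 inet static
        address 10.10.10.21
        netmask 255.255.255.0
        bond-slaves enp3s0f0 enp3s0f1   # two 40Gb ports, one to each switch
        bond-mode active-backup         # or 802.3ad if the switches support MLAG
        bond-miimon 100

(bond1 would look the same for the cluster network.)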

I'm thinking of OSD nodes with 12-16 * 4TB SATA disks for "high" I/O
pools (plus, for now, SSDs for the journal, although maybe by the time we
start, the LevelDB/RocksDB backends will be ready ... ?).
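
For the journals I'd picture the classic FileStore layout for now, i.e.
each OSD's journal on a partition of a shared SSD, roughly like this
(device names are only placeholders):

    # data disk /dev/sdb, journal partition carved out of the SSD /dev/sdc
    ceph-disk prepare /dev/sdb /dev/sdc
    ceph-disk activate /dev/sdb1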

Later we would add some less I/O-bound pools for data archiving/backup
(bigger and more disks per node).

We would also use cache tiering for some pools.
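
The cache tier setup itself seems to be just a handful of commands once an
SSD-backed pool exists, something like this (pool names are placeholders):

    # put an SSD-backed pool in front of a slower base pool in writeback mode
    ceph osd tier add base-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay base-pool cache-pool
    ceph osd pool set cache-pool target_max_bytes 1099511627776   # flush/evict limit, ~1 TB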

The reference designs from HP, Intel, Supermicro, etc. usually use a
non-redundant network connection (a single 10Gb link).

I know: redundancy keeps some headaches small, but it also adds complexity
and increases the budget (more network adapters, different servers, more
switches, etc.).

So what would you suggest, and what are your experiences?

	Thanks for any suggestions and feedback. Regards, Götz
-- 
Götz Reinicke
IT Coordinator

Tel. +49 7141 969 82 420
E-Mail goetz.reinicke@xxxxxxxxxxxxxxx

Filmakademie Baden-Württemberg GmbH
Akademiehof 10
71638 Ludwigsburg
www.filmakademie.de

Registered with the Amtsgericht Stuttgart, HRB 205016

Chairman of the Supervisory Board: Jürgen Walter MdL
State Secretary in the Ministry of Science,
Research and the Arts of Baden-Württemberg

Managing Director: Prof. Thomas Schadt
