Re: 6 Node cluster with 24 SSD per node: Hardwareplanning/ agreement

Hello,

On Wed, 05 Oct 2016 13:43:27 +0200 Denny Fuchs wrote:

> hi,
> 
> I got a call from Mellanox and we now have an offer for the following
> network:
> 
> * 2 x SN2100 100Gb/s switches, 16 ports each
Which incidentally is a half-sized (identical HW, really) Arctica 3200C.

> * 10 x ConnectX-4 Lx EN 25Gb cards for the hypervisor and OSD nodes
> * 4 x Mellanox QSA-to-SFP+ adapters for interconnecting to our
> HP 2920 switches
> * 3 x Copper split cables 1 x 100Gb -> 4 x 25Gb
> 

You haven't commented on my rather lengthy mail about your whole design,
so to reiterate:

The above will give you a beautiful, fast (though I doubt you'll need that
much bandwidth for your DB transactions), low-latency and redundant
network (these switches do/should support MC-LAG).
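
For what it's worth, MC-LAG on the switch pair means each node can run a
plain 802.3ad (LACP) bond with one port going to each SN2100. A minimal
sketch, Debian-style ifupdown; interface names and the address are
placeholders of mine, not from your offer:

  auto bond0
  iface bond0 inet static
      bond-slaves enp1s0f0 enp1s0f1   # one 25Gb port to each switch
      bond-mode 802.3ad               # LACP; needs MC-LAG on the switches
      bond-miimon 100                 # link monitoring interval in ms
      bond-xmit-hash-policy layer3+4  # spread flows across both links
      address 192.0.2.11              # placeholder
      netmask 255.255.255.0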

But unless you make significant changes to the rest of your design, that's
akin to putting a Formula 1 engine into a tractor, with the gearbox (the
journal NVMe) limiting your top speed and the wheels (your consumer SSDs)
likely to fall off.

In more technical terms: the network as depicted above can, under normal
circumstances, handle around 5GB/s, while your OSD nodes can't write more
than 1GB/s.
Massive, wasteful overkill.
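
Back-of-the-envelope, with assumptions on my part since the exact port
layout isn't spelled out above (dual 25Gb/s ports per node in an
active-active bond; a single journal NVMe good for roughly 1GB/s of
sequential writes):

  Network: 2 x 25Gb/s = 50Gb/s ~= 6.25GB/s raw, call it ~5GB/s usable
  Journal: 1 NVMe x ~1GB/s = ~1GB/s, and every write to every OSD on
           the node has to pass through it first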

With a 2nd NVMe in there you'd be at 2GB/s, or simple overkill.

With decent SSDs and in-line journals (400GB DC S3610s) you'd be at
4.8GB/s, a perfect match.
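
The math behind that, assuming the datasheet figure of roughly 400MB/s
sequential writes for the 400GB DC S3610 and the usual halving for
in-line journals (each client write hits the SSD twice):

  24 SSDs x 400MB/s / 2 = 4.8GB/s per node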

Of course, if your I/O bandwidth needs are actually below 1GB/s at all
times and all you care about is reducing latency, a single NVMe journal
will be fine (but it will also be a very obvious SPoF).

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


