Re: 6 Node cluster with 24 SSD per node: Hardware planning / agreement

hi,

With Xeon E3-1245s (3.6GHz with all 4 cores turbo'd) and a P3700
journal with 10G networking I have managed to get it down to around
600-700us. Make sure you force P-states and C-states, as without that
I was only getting about 2ms.
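
(One common way to pin C-states at runtime, besides kernel boot
parameters such as processor.max_cstate, is the kernel's PM QoS
interface at /dev/cpu_dma_latency. The Python sketch below is purely
illustrative and assumes a Linux host with root access; P-states are a
separate knob, normally set through the CPU frequency governor.)

    import struct

    # Requesting a DMA latency target of 0us via the PM QoS interface
    # keeps CPUs out of deep C-states for as long as this file
    # descriptor stays open.
    fd = open("/dev/cpu_dma_latency", "wb", buffering=0)
    fd.write(struct.pack("<I", 0))  # 32-bit value: 0 microseconds
    try:
        # The request is dropped the moment the fd closes, so the
        # process has to stay alive while low latency is needed.
        input("C-states pinned; press Enter to release...")
    finally:
        fd.close()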

I've added it to our buy/change list :-)

Ah ok, fair do's. Are the hypervisors connected via 10G as well, or
will they be 1G? You want 10G end to end to get the lowest
latency.

Biggest problem, as written in my last mail: in short, too few 10Gb
ports on our HP switches, which were bought without asking the right
people. Second: for security reasons we want to split the production
and storage networks, if possible. So we are unsure whether it is OK
for now to use only 2 x 1Gb/s via LACP plus 1 x 10Gb per node for the
interconnect. Maybe an SN2100 with breakout cables and 10 x CX4 cards,
using 10Gb Ethernet only for outgoing VM traffic (so only 4 x 2
10Gb/s (LACP) ports on the HP switches would be used, which would be
perfect).
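
(As a rough sketch of how that production/storage split is usually
expressed on the Ceph side: ceph.conf lets you put client-facing and
replication traffic on separate subnets. The subnets below are made-up
placeholders.)

    [global]
    # client/VM-facing traffic, e.g. the 2 x 1Gb/s LACP bond
    public network = 192.168.10.0/24
    # OSD replication and heartbeat traffic, e.g. the 1 x 10Gb link
    cluster network = 192.168.20.0/24

Worth noting: the primary OSD re-sends each client write to the
replica OSDs over the cluster network, so the storage side typically
carries more traffic than the public side; putting the single 10Gb
link there matches that pattern.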


[...]
journal consumer-based SSDs, otherwise they will likely have a very
short life. The 400GB P3700 will give you ~1000MB/s, which will match
your 10G network so might be ok. Is that ok with you? Or would
[...]
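
(A quick back-of-the-envelope check of that claim, with ~1000MB/s
being the quoted sequential-write spec for the 400GB P3700; Python
used just as a calculator here:)

    # Rough sanity check: does one ~1000MB/s journal match 10GbE?
    link_gbps = 10                             # 10GbE line rate
    usable_mb_s = link_gbps / 8 * 1000 * 0.9   # minus ~10% overhead
    journal_mb_s = 1000                        # 400GB P3700 write spec
    print(f"10GbE usable ~{usable_mb_s:.0f} MB/s "
          f"vs journal {journal_mb_s} MB/s")
    # -> 10GbE usable ~1125 MB/s vs journal 1000 MB/s: close match

So the journal device and the network saturate at about the same
point, which is why the P3700 "will match your 10G network".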

For our first start with Ceph, that should be OK for us.

For best low-latency performance, I would personally recommend scaling
out with more nodes using high-clocked single-socket Xeon E3
or Xeon E5-16xx CPUs rather than going with big boxes with
high-core-count CPUs.

Rack space and power are among our limiting factors, so I like 2U
chassis. SSDs are faster and need less power than spinning drives.
But a chassis with only 12 SSDs ... so I've chosen a 24-drive-slot
chassis, which gives us the most flexibility.

I used this board for my latest cluster; it has a lot of stuff on
board to save buying add-on cards.

https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSH-CTF.cfm

Thanks ... also looking at boards with DDR4 ...

cu denny
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


