Martin,
Thank you very much for sharing your insight on hardware options. This will be very useful for us going forward.
Shain
Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649
From: Martin B Nielsen [martin@xxxxxxxxxxx]
Sent: Monday, August 26, 2013 1:13 PM
To: Shain Miley
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Hardware recommendations

Hi Shain,
Those R515s seem to mimic our servers (2U Supermicro with 12x 3.5" bays and 2x 2.5" bays in the rear for the OS).
Since we need a mix of SSD and platter, each node has 8x 4TB drives and 4x 500GB SSDs, plus 2x 250GB SSDs for the OS (2x 8-port LSI 2308 HBAs in IT mode).
We've partitioned a 10GB journal from each of the 4x 500GB SSDs to serve 4 of the 4TB drives, and each OS disk holds 2 journals for the remaining 4 platter disks.
We tested journal placement quite a lot, and this layout fit our setup best (pure VM block storage, 3x replication).
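A minimal sketch of how such a journal layout could be carved out. The device names (/dev/sdb for one 500GB SSD, /dev/sdc and /dev/sdd for two 4TB platter disks) and partition bounds are assumptions for illustration, using the ceph-disk tooling current at the time:

```shell
# Assumed devices: /dev/sdb = 500GB SSD, /dev/sdc, /dev/sdd = 4TB platters.
# Carve two 10GB journal partitions on the SSD:
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart journal-0 1MiB 10GiB
parted -s /dev/sdb mkpart journal-1 10GiB 20GiB

# Prepare each platter OSD with its journal on the SSD partition:
ceph-disk prepare /dev/sdc /dev/sdb1
ceph-disk prepare /dev/sdd /dev/sdb2
```

In practice you'd repeat this per SSD/platter group and let udev or `ceph-disk activate` bring the OSDs up.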
Everything connected via 10GbE (1 network for cluster, 1 for public) and 3 standalone monitor servers.
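The split networks described above map onto ceph.conf roughly like this. The subnets, monitor name, and addresses below are made-up placeholders, not Martin's actual values:

```ini
; Sketch of the matching ceph.conf network section (subnets are assumptions)
[global]
public network  = 10.0.1.0/24   ; client-facing traffic
cluster network = 10.0.2.0/24   ; replication/recovery traffic

[mon.a]
host = mon1
mon addr = 10.0.1.11:6789
```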
For storage nodes we use an E5-2620 with 32GB RAM, and for monitor nodes an E3-1260L with 16GB RAM. We've tested with both 1 and 2 nodes going down and data redistribution kicking in, and they cope more than fine.
Overall I find these nodes a good compromise between capacity, price and performance. We looked into getting 2U servers with 8x 3.5" bays and buying more of them, but ultimately went with this.
We also have some boxes from Coraid (SR and SRX, with and without flashcache/EtherFlash), so we've been able to do some direct comparisons, and so far Ceph is looking good, especially on the price-to-storage ratio.
At any rate, back to your mail: I think the most important factor is looking at all the pieces and making sure you're not hard-bottlenecked anywhere. We found 24GB RAM to be a little on the low side when all 12 disks started redistributing, but 32GB is fine. Also, not having journals on SSD in front of the platters really hurt a lot when we tested; this can probably be mitigated somewhat with better RAID controllers. CPU-wise, the E5-2620 hardly breaks a sweat, even with the little extra work a downed node brings.
Good luck with your HW-adventure :).
Cheers,
Martin
On Mon, Aug 26, 2013 at 3:56 PM, Shain Miley
<SMiley@xxxxxxx> wrote:
Good morning,
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com