On Wed, 18 Mar 2015 08:59:14 +0100 Josef Johansson wrote:

> Hi,
>
> > On 18 Mar 2015, at 05:29, Christian Balzer <chibi@xxxxxxx> wrote:
> >
> > Hello,
> >
> > On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
> [snip]
> >> We thought of doing a cluster with 3 servers, and any recommendation
> >> of Supermicro servers would be appreciated.
> >>
> > Why 3, replication of 3?
> > With Intel SSDs and diligent (SMART/Nagios) wear-level monitoring I'd
> > personally feel safe with a replication factor of 2.
> >
> I’ve seen recommendations of replication 2! The Intel SSDs are indeed
> endurable. This is only with Intel SSDs, I assume?

From the specifications and reviews I've seen, the Samsung 845DC PRO, the
SM843T and even more so the SV843
(http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview
-- don't you love it when the same company has different, competing
products?) should do just fine when it comes to endurance and performance.
Alas, I have no first-hand experience with either, just the
(read-optimized) 845DC EVO.

> This 1U
> http://www.supermicro.com.tw/products/system/1U/1028/SYS-1028U-TR4T_.cfm
> is really nice, missing the SuperDOM peripherals though...

While I certainly see use cases for SuperDOM, not all models have 2
connectors, so there is no way to RAID1 things, and thus you would
_definitely_ have to pull the server out (and re-install the OS) should it
fail.

> ...so you really get 8 drives if you need two for the OS. And the
> rails... don't get me started, but lately they just snap into the racks!
> No screws needed. That's a refresh from the earlier 1U SM rails.
>
Ah, the only 1U servers I'm currently deploying from SM are older ones, so
still no snap-in rails. Everything 2U has been that way for at least 2
years, though.
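For anyone wanting to automate the "diligent (SMART/Nagios) wear-level
monitoring" mentioned above, a minimal sketch of a Nagios-style check might
look like the following. It is only illustrative: it parses the output of
`smartctl -A` (smartmontools) for attribute 233 (Media_Wearout_Indicator on
Intel SSDs, a normalized value that starts at 100 and counts down), and the
warning/critical thresholds are made-up numbers you would tune yourself.

```python
#!/usr/bin/env python
# Hypothetical Nagios-style wear check, parsing `smartctl -A` text.
# On Intel SSDs, attribute 233 (Media_Wearout_Indicator) has a normalized
# VALUE that starts at 100 and decreases toward the THRESH as flash wears.
# WARN/CRIT below are illustrative thresholds, not vendor recommendations.

WARN, CRIT = 30, 10

def wearout_value(smartctl_output):
    """Return the normalized value of attribute 233, or None if absent."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "233":
            return int(fields[3])  # 4th column is the normalized VALUE
    return None

def nagios_status(value):
    """Map a wear value to (label, exit code) per Nagios conventions."""
    if value is None:
        return ("UNKNOWN", 3)
    if value <= CRIT:
        return ("CRITICAL", 2)
    if value <= WARN:
        return ("WARNING", 1)
    return ("OK", 0)

# Example: a trimmed attribute line as produced by `smartctl -A /dev/sda`
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     RAW_VALUE
233 Media_Wearout_Indicator 0x0032   097   097   000    Old_age  0
"""

if __name__ == "__main__":
    value = wearout_value(SAMPLE)
    label, code = nagios_status(value)
    print("SSD WEAR %s - media_wearout=%s" % (label, value))
```

In a real deployment you would feed it `subprocess` output from
`smartctl -A /dev/sdX` per OSD drive and `sys.exit(code)` so Nagios/NRPE
picks up the state.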
Christian

-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com