Re: SSD Hardware recommendation

> On 19 Mar 2015, at 08:17, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> On Wed, 18 Mar 2015 08:59:14 +0100 Josef Johansson wrote:
> 
>> Hi,
>> 
>>> On 18 Mar 2015, at 05:29, Christian Balzer <chibi@xxxxxxx> wrote:
>>> 
>>> 
>>> Hello,
>>> 
>>> On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
>> 
> [snip]
>>>> We though of doing a cluster with 3 servers, and any recommendation of
>>>> supermicro servers would be appreciated.
>>>> 
>>> Why 3, replication of 3? 
>>> With Intel SSDs and diligent (SMART/NAGIOS) wear level monitoring I'd
>>> personally feel safe with a replication factor of 2.
>>> 
>> I’ve seen recommendations of replication 2! The Intel SSDs are indeed
>> durable. I assume that only applies to Intel SSDs?
> 
> From the specifications and reviews I've seen the Samsung 845DC PRO, the
> SM 843T and even more so the SV843 
> (http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview
> don't you love it when the same company has different, competing
> products?) should do just fine when it comes to endurance and performance.
> Alas I have no first hand experience with either, just the
> (read-optimized) 845DC EVO.
> 
The 845DC Pro does look really nice, comparable with the S3700 even in TBW.
The price is what really does it, as it’s almost a third of the S3700’s.

With a replication factor of 3 it ends up at the same price as the S3610 with a replication factor of 2.
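The break-even arithmetic is simple: replication multiplies the raw capacity cost, so a drive run at a replication factor of 3 only needs to cost less than 2/3 of the replication-2 alternative to come out ahead. A quick sketch (the prices are placeholder assumptions, not quotes from this thread):

```python
# Cost per usable GB under replication: every object is stored
# `replication` times, so the raw capacity cost is multiplied.
def usable_cost_per_gb(drive_price, drive_gb, replication):
    return drive_price * replication / drive_gb

# Hypothetical 800 GB class drives; the cheaper one costs exactly
# 2/3 of the pricier one, so the two setups land at the same cost.
rep2 = usable_cost_per_gb(1200.0, 800, replication=2)  # pricier drive, size 2
rep3 = usable_cost_per_gb(800.0, 800, replication=3)   # cheaper drive, size 3
print(f"rep 2: ${rep2:.2f}/usable GB")
print(f"rep 3: ${rep3:.2f}/usable GB")
```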

How enterprise-ish is it, according to the Inktank guys, to run with a replication factor of 2?

I’m really leaning towards the 845DC Pro here, actually.
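For the diligent SMART/Nagios wear-level monitoring mentioned above, a minimal check could parse `smartctl -A` output for the Media_Wearout_Indicator attribute (that’s the name Intel drives use; Samsung drives report wear under a different attribute, and the thresholds below are my own assumptions, so treat this as a sketch rather than a drop-in plugin):

```python
import re

# Nagios-style thresholds on the normalized value, which counts
# down from 100 toward 0 as the flash wears out (assumption).
WARN, CRIT = 30, 10

def check_wearout(smartctl_output):
    """Return a (nagios_exit_code, message) pair for the wearout attribute.

    Expects the text of `smartctl -A /dev/sdX`; the second column after
    the attribute name is the normalized VALUE field.
    """
    m = re.search(r"Media_Wearout_Indicator\s+\S+\s+(\d+)", smartctl_output)
    if not m:
        return 3, "UNKNOWN: wearout attribute not found"
    value = int(m.group(1))
    if value <= CRIT:
        return 2, f"CRITICAL: wearout value {value}"
    if value <= WARN:
        return 1, f"WARNING: wearout value {value}"
    return 0, f"OK: wearout value {value}"
```

Wired into Nagios, the exit code drives the alert; the same check could also feed a graph so you can watch the slope and predict when drives need replacing.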
> 
>> This 1U
>> http://www.supermicro.com.tw/products/system/1U/1028/SYS-1028U-TR4T_.cfm
>> <http://www.supermicro.com.tw/products/system/1U/1028/SYS-1028U-TR4T_.cfm>
>> is really nice, though it’s missing the SuperDOM connectors..
> While I certainly see use cases for SuperDOM, not all models have 2
> connectors, so there's no chance to RAID1 things, which means you'd
> definitely have to pull the server out (and re-install the OS) should it fail.
Yeah, I fancy using hot-swap bays for the OS disks, and with 24 hot-swap bays up front there’s plenty of room for a couple of OS drives =)
The 2U also has room for an extra 2x10GbE card, for a total of 4x10GbE, which is needed.
> 
>> so you really
>> get 8 drives if you need two for the OS. And the rails.. don’t get me
>> started, but lately they just snap into the racks! No screws needed.
>> That’s a welcome change from the earlier 1U SM rails.
>> 
> Ah, the only 1U servers I'm currently deploying from SM are older ones, so
> still no snap-in rails. Everything 2U has been that way for at least 2
> years, though. ^^
It’s awesome I tell you. :)

Cheers,
Josef

> 
> Christian
> -- 
> Christian Balzer        Network/Systems Engineer                
> chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
> http://www.gol.com/

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




