Re: Again: full ssd ceph cluster


 



On 12/10/2014 04:08 PM, Mike wrote:
> Hello all!
> One of our customers has asked for SSD-only storage.
> For now we are looking at a 2027R-AR24NV with 3 x HBA controllers
> (LSI3008 chip, 8 internal 12Gb/s ports each), 24 x Intel DC S3700
> 800GB SSDs, 2 x Mellanox 40Gbit ConnectX-3 (maybe the newer
> ConnectX-4 100Gbit) and a Xeon E5-2660v2 with 64GB RAM.
> Replica is 2.
> Or something like that, but in 1U with 8 SSDs.

I would recommend 1U with 8 SSDs. Such huge machines with so many SSDs
will require some serious network bandwidth and CPU power.

It's better to go for more, but smaller, machines. Your cluster will
suffer less from losing a machine.
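A quick back-of-the-envelope sketch of why smaller hosts recover more
gracefully (the figures here are illustrative assumptions: 800GB drives,
70% full, total drive count fixed, and ignoring CRUSH placement details):

```python
# Rough estimate of re-replication load when one host fails.
# Assumptions (not from the thread): 0.8 TB drives, 70% utilization.
def rebalance_load(total_drives, drives_per_host, drive_tb=0.8, fill=0.7):
    hosts = total_drives // drives_per_host
    lost_tb = drives_per_host * drive_tb * fill   # data on the failed host
    survivors = hosts - 1
    # Total TB to re-replicate, and the share each surviving host absorbs.
    return lost_tb, lost_tb / survivors

# Same 48 drives, packed differently:
print(rebalance_load(48, 24))  # 2 big hosts: one survivor takes it all
print(rebalance_load(48, 8))   # 6 small hosts: load spread over 5 peers
```

With 8-drive hosts, a failure re-replicates roughly a third of the data a
24-drive host failure would, and spreads it over five survivors instead of
one, so recovery traffic per host drops by an order of magnitude.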

> 
> We see a small bottleneck on the network cards, but the bigger question
> is: can Ceph (Giant release), with sharded I/O and the other new
> features, unlock this potential?
> 
> Any ideas?
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on



