Building a PB-scale EC cluster for cheaper cold storage

Hello.

For our CCTV stream-storage project we decided to use a Ceph cluster with an EC pool. The input requirements are not scary: max. 15 Gbit/s of incoming traffic from the CCTV cameras, 30-day retention, 99% write operations, and the cluster must be able to grow without downtime.

Our current view of the architecture looks like this:
* 6 JBODs with 90 x 8 TB HDDs each (540 HDDs total)
* 6 Ceph servers, each connected to its own JBOD (so 6 pairs: 1 server + 1 JBOD).

Ceph servers hardware details:
* 2 x E5-2690v3: 24 cores total (w/o HT), 2.6 GHz
* 256 GB DDR4 RAM
* 4 x 10 Gbit/s NIC ports (2 for the client network, 2 for the cluster network)
* Servers also have 4 (8) x 2.5" SATA drives on board for the cache tiering feature (because Ceph clients can't talk directly to an EC pool)
* Two SAS HBA controllers with multipathing, for the HA scenario.
* For Ceph monitor duty, 3 of the servers have 2 SSDs in software RAID1.

Some Ceph configuration rules:
* EC pools with K=7 and M=3
* EC plugin - ISA
* technique = reed_sol_van
* ruleset-failure-domain = host
* near full ratio = 0.75
* OSD journal partition on the same disk
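The rules above would translate into roughly the following CLI commands (a sketch, not a tested config: the profile and pool names are made up, the PG count of 4096 is only a rough guess for 540 OSDs with k+m=10, and on Luminous and later the option is spelled crush-failure-domain instead of ruleset-failure-domain):

```shell
# Create an EC profile matching the planned rules (names are hypothetical).
ceph osd erasure-code-profile set cctv_ec \
    k=7 m=3 \
    plugin=isa technique=reed_sol_van \
    ruleset-failure-domain=host   # crush-failure-domain on Luminous+

# Create the erasure-coded data pool on that profile.
# PG count is a rough guess (~100 PGs per OSD / (k+m)); check with pgcalc.
ceph osd pool create cctv_data 4096 4096 erasure cctv_ec
```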

We expect the first two bottlenecks to be CPU and RAM on the Ceph servers.
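Before CPU and RAM, raw capacity vs. ingest may be worth a sanity check. A back-of-the-envelope sketch (assuming the 15 Gbit/s peak is sustained and using decimal units; both are pessimistic/simplifying assumptions):

```shell
# Capacity side: raw -> EC-usable -> effective after near-full ratio.
RAW_TB=$(( 540 * 8 ))                        # 540 x 8 TB drives = 4320 TB raw
USABLE_TB=$(( RAW_TB * 7 / 10 ))             # k=7,m=3 -> 7/10 efficiency = 3024 TB
EFFECTIVE_TB=$(( USABLE_TB * 75 / 100 ))     # near-full ratio 0.75 -> 2268 TB

# Demand side: 15 Gbit/s sustained for 30 days of retention.
NEEDED_TB=$(( 15 * 86400 * 30 / 8 / 1000 ))  # ~4860 TB

echo "effective=${EFFECTIVE_TB} TB  needed=${NEEDED_TB} TB"
```

If the cameras really do push 15 Gbit/s around the clock, the effective capacity comes out well below the 30-day requirement, so either the sustained rate is much lower than the peak or the disk count may need revisiting.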

Any ideas? Can it fly?



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


