~5PB usable space based on an Erasure Coded Pool

Hello Cephers!
I have an interesting task from one of our clients.
The client has 3000+ video cams (monitoring streets, porches, entrances,
etc.), and we need to store the data from these cams for 30 days.

Each cam generates ~1.3TB of data over 30 days, and the total ingest
bandwidth is ~14Gbit/s. In total we need (1.3TB x 3000) ~4PB+ of usable
storage, plus ~20% headroom for recovery if one JBOD fails.
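
To sanity-check the numbers, a quick back-of-the-envelope sketch in
Python (the k=6/m=3 profile is just the candidate from the "Ceph
settings" section below, not a final choice):

    # Capacity sizing: 3000 cams, 1.3 TB per cam over 30 days,
    # 20% headroom, EC k=6 m=3 -> 1.5x raw overhead.
    cams = 3000
    tb_per_cam = 1.3                              # TB per cam per 30 days
    headroom = 1.20                               # +20% for recovery
    k, m = 6, 3                                   # candidate EC profile

    usable_tb = cams * tb_per_cam                 # ~3900 TB
    with_headroom_tb = usable_tb * headroom       # ~4680 TB
    raw_tb = with_headroom_tb * (k + m) / k       # ~7020 TB raw

    print(f"usable: {usable_tb:.0f} TB, with headroom: {with_headroom_tb:.0f} TB,"
          f" raw (k={k}, m={m}): {raw_tb:.0f} TB")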

The number of cams may grow over time.

Another thing to keep in mind: the storage should be as cheap as possible.

My thinking:
* Pair each Ceph server with a fat JBOD
* Build ~15 such pairs (rough per-pair numbers in the sketch below)
* On the JBODs, create an erasure coded pool with a reasonable failure domain
* On the Ceph servers, add a read-only cache tier, because an erasure
coded pool can't be accessed directly by clients.
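
Rough per-pair numbers behind the "~15 pairs" point (a sketch only; the
84-slot JBOD count comes from the hardware list below, and the 8 TB
drive size is purely my assumption):

    # Per-pair sanity check: 15 pairs, 84-slot JBODs, assumed 8 TB drives,
    # ~7 PB raw needed (from the capacity sketch above), 14 Gbit/s ingest.
    pairs = 15
    slots_per_jbod = 84
    drive_tb = 8.0                                # assumed drive size
    raw_needed_tb = 7020                          # from the sketch above
    ingest_gbit = 14

    raw_per_pair_tb = slots_per_jbod * drive_tb   # ~672 TB raw per pair
    total_raw_tb = raw_per_pair_tb * pairs        # ~10 PB raw across 15 pairs
    ingest_per_pair = ingest_gbit / pairs         # <1 Gbit/s per pair

    print(f"raw per pair: {raw_per_pair_tb:.0f} TB, total raw: {total_raw_tb:.0f} TB"
          f" (need ~{raw_needed_tb} TB), ingest per pair: {ingest_per_pair:.2f} Gbit/s")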

Hardware:
Ceph server
* 2 x E5-2690 v3 Xeon (maybe E5-2697 v3)
* 256GB RAM
* some Intel DC S36xx series SSDs
* 2 x dual-port 10Gbit/s NICs (or maybe 1 x dual-port 10Gbit/s plus 1 x
dual-port 40Gbit/s for the storage network)
* 2 x SAS HBAs with 4 external SAS ports each

JBOD
* DataON DNS-2670/DNS-2684, each carrying 70 or 84 drives, or a Supermicro
946ED-R2KJBOD that can carry 90 drives.

Ceph settings
* Use the lrc plugin (?), with k=6, m=3, l=3, ruleset-failure-domain=host,
ruleset-locality=rack (rough overhead sketch below)
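
As far as I understand, the lrc plugin adds one local parity chunk per
group of l chunks (so k+m must be divisible by l), which changes the raw
overhead compared to plain jerasure. A small sketch of that cost, under
that assumption:

    # Storage overhead comparison (assumption: the lrc plugin stores
    # k + m + (k + m) / l chunks in total, i.e. one local parity per
    # group of l chunks; raw overhead = total chunks / k).
    def overhead(k, m, l=None):
        chunks = k + m + ((k + m) // l if l else 0)
        return chunks / k

    print("jerasure k=6 m=3     :", overhead(6, 3))      # 1.5x raw
    print("lrc      k=6 m=3 l=3 :", overhead(6, 3, 3))   # 2.0x raw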

I have not yet learned much about the differences between the erasure
code plugins, their performance, or their low-level configuration.

Do you have any advice on this? Can it work at all? Can erasure coding,
as implemented in Ceph, solve this task?

Thanks in advance for any advice.

--
Mike, yes.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


