Re: Building a Pb EC cluster for a cheaper cold storage

10.11.2015 19:40, Paul Evans wrote:
> Mike - unless things have changed in the latest version(s) of Ceph, I do *not* believe CRUSH will be successful in creating a valid PG map if the 'n' value is 10 (k+m), your host count is 6, and your failure domain is set to host. You'll need to increase your host count to match or exceed 'n', change the failure domain to OSD, or alter the k+m config to something more compatible with your host count…otherwise you'll end up with incomplete PGs.
> Also note that having more failure domains (i.e. hosts) than your 'n' value is recommended.
> 
> Beyond that, you're likely to run into operational challenges putting that many drives behind a single CPU complex when the host count is quite low. My $.02.
> --
> Paul

Thanks, Paul!
I hadn't thought about that! It's a golden $.02 from you :)
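
Just so I understand the fix: a profile whose k+m does not exceed our 6 hosts
might look roughly like this (untested sketch; the profile name "ecpool_k4m2"
is just a placeholder):

  ceph osd erasure-code-profile set ecpool_k4m2 \
      k=4 m=2 \
      plugin=isa technique=reed_sol_van \
      ruleset-failure-domain=host

With k=4 and m=2 each PG needs 6 distinct hosts, which only just matches our
host count, so following your advice to have more failure domains than 'n'
would still mean adding hosts first.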

> 
> On Nov 10, 2015, at 2:29 AM, Mike Almateia <mike.almateia@xxxxxxxxx> wrote:
> 
> Hello.
> 
> For our CCTV stream-storage project we decided to use a Ceph cluster with an EC pool.
> The input requirements are not scary: max. 15 Gbit/s of incoming traffic from the CCTV, 30 days of retention,
> 99% write operations, and the cluster must be able to grow without downtime.
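> 
> For sizing context, a back-of-the-envelope worst case, assuming the 15 Gbit/s
> peak were sustained around the clock (the real average should be lower):
>   15 Gbit/s ~= 1.9 GB/s ~= 162 TB/day ~= 4.9 PB stored per 30 days,
>   or ~7 PB raw with k=7/m=3 (10/7 overhead), against 540 x 8 TB = 4.3 PB raw.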
> 
> For now, our vision of the architecture is:
> * 6 JBODs with 90 x 8 TB HDDs each (540 HDDs total)
> * 6 Ceph servers, each connected to its own JBOD (6 pairs: 1 server + 1 JBOD).
> 
> Ceph server hardware details:
> * 2 x E5-2690v3: 24 cores total (w/o HT), 2.6 GHz
> * 256 GB DDR4 RAM
> * 4 x 10 Gbit/s NIC ports (2 for the client network and 2 for the cluster network)
> * servers also have 4 (8) x 2.5" SATA drives on board for the cache tiering feature, because Ceph clients can't talk directly to an EC pool (see the cache-tier sketch after this list)
> * Two SAS HBA controllers with multipathing, for the HA scenario
> * For Ceph monitor functionality, 3 of the servers have 2 SSDs in software RAID1
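> 
> Roughly how we plan to wire the cache tier in front of the EC pool (a sketch
> only; "cctv-cache" is a placeholder name, and it assumes an EC pool called
> "cctv-ec" as in the config sketch further down):
> 
>   ceph osd tier add cctv-ec cctv-cache
>   ceph osd tier cache-mode cctv-cache writeback
>   ceph osd tier set-overlay cctv-ec cctv-cache
>   ceph osd pool set cctv-cache hit_set_type bloom
>   ceph osd pool set cctv-cache target_max_bytes <cache pool capacity>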
> 
> Some planned Ceph configuration rules (roughly sketched below):
> * EC pool with k=7 and m=3
> * EC plugin - ISA
> * technique = reed_sol_van
> * ruleset-failure-domain = host
> * nearfull ratio = 0.75
> * OSD journal partitions on the same disks as the data
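> 
> What this would look like in commands (a rough sketch; the profile/pool names
> and PG counts are placeholders):
> 
>   ceph osd erasure-code-profile set cctv_k7m3 \
>       k=7 m=3 \
>       plugin=isa technique=reed_sol_van \
>       ruleset-failure-domain=host
>   ceph osd pool create cctv-ec 4096 4096 erasure cctv_k7m3
> 
>   # in ceph.conf ([global] section):
>   mon osd nearfull ratio = 0.75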
> 
> We think the first and second problems will be CPU and RAM on the Ceph servers.
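> 
> Simple ratios from the numbers above: 90 OSDs per server works out to
> 256 GB / 90 ~= 2.8 GB of RAM and 24 cores / 90 ~= 0.27 cores per OSD
> (not counting the cache-tier drives or the EC coding overhead).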
> 
> Any ideas? Can it fly?
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



