Re: Building a Pb EC cluster for a cheaper cold storage

On 10 November 2015 at 10:29, Mike Almateia <mike.almateia@xxxxxxxxx> wrote:
> Hello.
>
> For our CCTV stream-storage project we decided to use a Ceph cluster with an
> EC pool.
> The input requirements are not scary: max. 15 Gbit/s of input traffic from
> CCTV, 30-day retention, 99% write operations, and the cluster must be able
> to grow without downtime.
>
> Our current vision of the architecture is:
> * 6 JBODs with 90 x 8 TB HDDs each (540 HDDs total)
> * 6 Ceph servers, each connected to its own JBOD (6 pairs of 1 server +
> 1 JBOD).
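
As a rough back-of-envelope on those numbers (a sketch only: it treats the
15 Gbit/s as sustained around the clock and assumes an illustrative 4+2 EC
profile, neither of which is stated in the thread):

    # Rough capacity check using the quoted figures (illustrative assumptions).
    ingest_gbit_s = 15                                  # quoted peak input rate
    days = 30                                           # quoted retention
    k, m = 4, 2                                         # hypothetical EC profile

    ingest_tb_day = ingest_gbit_s / 8 * 86400 / 1000    # ~162 TB/day
    usable_tb = ingest_tb_day * days                    # ~4860 TB before EC overhead
    raw_tb = usable_tb * (k + m) / k                    # ~7290 TB including EC overhead
    print(f"usable ~{usable_tb:.0f} TB, raw ~{raw_tb:.0f} TB, "
          f"disk available {540 * 8} TB")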

8TB drives. Are you thinking of using SMR drives?

If you are looking at SMR drives, be aware that they behave very differently
from traditional drives and may cause you additional issues. We went down
that route (based on some early, and probably flawed, testing) and the
throughput we get is much lower than with more conventional drives. We've
also had various stability issues with 70 such drives in similarly specified
systems running standard replication (no EC pools).

As others have said, fewer disks per node and more nodes would be sensible,
and if you're thinking of using SMR drives I'd suggest reconsidering, or at
least doing a lot of thorough testing first. In particular, test how well
they work in your planned setup as the cluster fills up, and how they cope
with normal workloads plus scrubbing and recovery.
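
To put the host-count point in numbers (a sketch with hypothetical profiles,
not a recommendation): with crush-failure-domain=host, an EC pool needs at
least k+m hosts to put every chunk on a separate host, so six hosts caps the
profiles you can use and leaves no spare host to recover onto if one fails.

    # Illustrative check of which EC profiles a 6-host cluster can place
    # when the CRUSH failure domain is the host.
    hosts = 6                              # the proposed 6 server + JBOD pairs
    profiles = [(4, 2), (8, 3), (10, 4)]   # hypothetical (k, m) choices
    for k, m in profiles:
        overhead = (k + m) / k             # raw capacity per unit of usable data
        fits = k + m <= hosts
        print(f"k={k} m={m}: overhead x{overhead:.2f}, "
              f"{'fits within' if fits else 'needs more than'} {hosts} hosts")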

-- 
Mike Axford:
Infrastructure Operations
LiveLink Technology Ltd
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


