On 12-Nov-15 03:33, Mike Axford wrote:
On 10 November 2015 at 10:29, Mike Almateia <mike.almateia@xxxxxxxxx> wrote:
Hello.
For our CCTV storing streams project we decided to use Ceph cluster with EC
pool.
The input requirements are not scary: max. 15 Gbit/s of input traffic from
the CCTV cameras, 30-day retention,
99% write operations, and the cluster must be able to grow without downtime.
Our current view of the architecture is:
* 6 JBODs, each with 90 x 8 TB HDDs (540 HDDs total)
* 6 Ceph servers, each connected to its own JBOD (6 pairs of 1 server +
1 JBOD).
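A quick sanity check of the capacity math, sketched in Python. Note the
k=4, m=2 erasure-code profile below is a hypothetical example for
illustration; no profile is stated anywhere in this thread:

```python
# Back-of-the-envelope capacity check (decimal TB throughout).
# The k=4, m=2 EC profile is a hypothetical assumption, not from the thread.
ingest_gbit_s = 15.0      # stated maximum input traffic
retention_days = 30       # stated retention period
k, m = 4, 2               # hypothetical EC profile

seconds = retention_days * 86400
ingested_tb = ingest_gbit_s / 8 * 1e9 * seconds / 1e12   # data written
raw_tb = 540 * 8                                         # 540 HDDs x 8 TB
needed_tb = ingested_tb * (k + m) / k                    # with EC overhead

print(f"ingested over 30 days:   {ingested_tb:.0f} TB")  # 4860 TB
print(f"raw capacity:            {raw_tb} TB")           # 4320 TB
print(f"needed with EC k={k},m={m}: {needed_tb:.0f} TB") # 7290 TB
```

If the 15 Gbit/s peak were sustained around the clock, the 4320 TB of raw
disk would not even cover the un-encoded 4860 TB of video, let alone the
EC overhead; presumably the average ingest rate is well below the stated
maximum, but it is worth checking that assumption before sizing the cluster.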
8TB drives. Are you thinking of using SMR drives?
If you are looking at SMR drives be aware that these are very
different to traditional drives and may cause you additional issues.
We've gone down that route (based on some early, and probably flawed
testing) and the throughputs we can get are much lower than more
traditional drives. We've also had various stability issues with 70
such drives in similarly specified systems with standard replication
(no EC pools).
As others have said, fewer disks per node and more nodes would be
sensible, and if you're thinking of using SMR drives I'd suggest
reconsidering, or doing a lot of thorough testing first: in particular,
how well they perform in your planned setup as the cluster fills up
with data, and how they handle normal workloads plus scrubbing and
recovery.
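For what it's worth, that kind of testing could start from an EC profile
sized to the planned six hosts. A sketch using the standard Ceph CLI; the
pool name, PG count, and k/m values here are illustrative assumptions, not
anything specified in this thread:

```shell
# Hypothetical EC profile: k=4 data + m=2 coding chunks, spread across
# 6 hosts (one chunk per host). Values are examples, not recommendations.
ceph osd erasure-code-profile set cctv-profile \
    k=4 m=2 crush-failure-domain=host

# Create an erasure-coded pool using that profile (PG count is a guess;
# size it properly for 540 OSDs before production use).
ceph osd pool create cctv-ec 2048 2048 erasure cctv-profile
```

Running the planned write workload against such a pool, then triggering
scrubs and pulling a host to force recovery, would exercise exactly the
cases mentioned above.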
No, we are thinking about the HGST Ultrastar He8 8TB Dual SAS.
Thanks for sharing your opinion on SMR drives.
--
Mike, runs.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com