Hello Stefan,
> AMD EPYC

Great choice!

> Has anybody experience with the drives?

Some of our customers run various Toshiba MG06SCA drives, and according to them they work great. I can't speak for the MG07ACA, but to be honest, I don't think there should be a huge difference.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
Hint: Secure one of the last slots in the upcoming 4-day Ceph Intensive Training at https://croit.io/training/4-days-ceph-in-depth-training.
On Fri, Jan 10, 2020 at 17:32, Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx> wrote:
Hi,
We're currently in the process of building a new Ceph cluster to back up RBD images from multiple Ceph clusters.
We would like to start by backing up just a single Ceph cluster, which is about 50 TB. The compression ratio of the data is around 30% with zlib. We need to be able to scale the backup cluster up to 1 PB.
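For reference, a minimal sketch of enabling per-pool BlueStore compression with zlib on the backup cluster; the pool name rbd-backup is illustrative:

  # compress all writes to the backup pool with zlib
  ceph osd pool set rbd-backup compression_algorithm zlib
  ceph osd pool set rbd-backup compression_mode aggressive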
The workload on the original RBD images is mostly 4K writes, so I expect rbd export-diff to produce a lot of small writes.
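A minimal sketch of the intended snapshot-diff backup flow, assuming hypothetical image names pool/image (source) and rbd-backup/image on the backup cluster, whose config is at /etc/ceph/backup.conf:

  # one-time: full export of a base snapshot to the backup cluster
  rbd snap create pool/image@base
  rbd export pool/image@base - | rbd -c /etc/ceph/backup.conf import - rbd-backup/image
  rbd -c /etc/ceph/backup.conf snap create rbd-backup/image@base

  # per backup run: ship only the changes since the last snapshot
  rbd snap create pool/image@run1
  rbd export-diff --from-snap base pool/image@run1 - \
      | rbd -c /etc/ceph/backup.conf import-diff - rbd-backup/image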
The current idea is to use the following hardware as a start:

6 servers, each with:
1x AMD EPYC 7302P 3 GHz, 16C/32T
128 GB memory
14x 12 TB Toshiba Enterprise MG07ACA HDDs (4K native)
Dual 25 GbE network
Does this fit? Does anybody have experience with the drives? Can we use EC, or do we need to use normal replication?
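If EC turns out to be an option, a sketch of what an EC-backed RBD pool could look like on 6 hosts; profile and pool names are illustrative, and EC overwrites require BlueStore:

  # a 4+2 profile fits 6 hosts with host as the failure domain
  ceph osd erasure-code-profile set backup-ec k=4 m=2 crush-failure-domain=host
  ceph osd pool create rbd-backup-data 1024 1024 erasure backup-ec
  ceph osd pool set rbd-backup-data allow_ec_overwrites true
  # RBD metadata must stay on a replicated pool; only the data goes to EC
  rbd create --size 10T --data-pool rbd-backup-data rbd-backup/image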
Greets,
Stefan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com