I would try to scale horizontally with smaller Ceph nodes; that gives you
the advantage of being able to choose an EC profile that does not
require too much overhead, and you can still use host as the failure domain.
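As a rough sketch (the profile name backup_ec and the k=4/m=2 values are
just examples, not a recommendation), such a profile could be created with:

  ceph osd erasure-code-profile set backup_ec k=4 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile get backup_ec

With k=4/m=2 the raw-space overhead is (k+m)/k = 1.5x, and with failure
domain host you need at least k+m = 6 hosts; more, smaller nodes give you
room for wider (lower-overhead) profiles.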
Joachim
On 09.01.2020 at 15:31, Wido den Hollander wrote:
On 1/9/20 2:27 PM, Stefan Priebe - Profihost AG wrote:
Hi Wido,
On 09.01.20 at 14:18, Wido den Hollander wrote:
On 1/9/20 2:07 PM, Daniel Aberger - Profihost AG wrote:
On 09.01.20 at 13:39, Janne Johansson wrote:
I'm currently trying to workout a concept for a ceph cluster which can
be used as a target for backups which satisfies the following
requirements:
- approx. write speed of 40,000 IOPS and 2,500 MB/s
You will probably need a large number of writers (certainly more than one)
to reach that aggregate rate, as opposed to trying to hit it with a single
stream written from a single client.
We are aiming for about 100 writers.
So if I read that correctly, the writes will be about 64k each
(2,500 MB/s / 40,000 IOPS ≈ 64 KB per write).
Maybe ;-) see below
That should be doable, but you probably want something like NVMe for DB+WAL.
You might want to tune it so that larger writes also go into the WAL, to
speed up the ingest. But mainly you want more spindles rather than fewer.
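A minimal sketch of that tuning, assuming BlueStore OSDs with DB+WAL on
NVMe (the 128 KiB threshold is only an example; the default for HDD-backed
OSDs is 32 KiB):

  [osd]
  # writes smaller than this are deferred: acknowledged from the WAL/DB
  # device first and flushed to the slow device later (example value)
  bluestore_prefer_deferred_size_hdd = 131072

With something like this, the ~64k backup writes would land on the NVMe
WAL first instead of waiting for the spinning disks.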
I would like to give a little more insight into this: most probably there
is some overhead included in those numbers. The values come from our old
classic RAID storage boxes, which use btrfs + zlib compression + subvolumes
for these backups, and we've collected the numbers from all of them.
The new system should just replicate snapshots from the live ceph.
Hopefully being able to use Erasure Coding and compression? ;-)
Compression might work, but only if the data is compressible.
EC usually writes very fast, so that's good. I would recommend a lot of
spindles though. More spindles == more OSDs == more performance.
So instead of using 12TB drives you could consider 6TB or 8TB drives.
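If the backup data turns out to compress well, per-pool BlueStore
compression could be enabled on the EC pool, for example (the pool name
backups and the snappy choice are just placeholders):

  ceph osd pool set backups compression_mode aggressive
  ceph osd pool set backups compression_algorithm snappy

aggressive compresses all writes unless the client hints that the data is
incompressible; zstd or zlib would trade more CPU for a better ratio.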
Wido
Greets,
Stefan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com