Hi Wido,

On 09.01.20 at 14:18, Wido den Hollander wrote:
>
>
> On 1/9/20 2:07 PM, Daniel Aberger - Profihost AG wrote:
>>
>> On 09.01.20 at 13:39, Janne Johansson wrote:
>>>
>>>     I'm currently trying to work out a concept for a Ceph cluster
>>>     which can be used as a target for backups and which satisfies
>>>     the following requirements:
>>>
>>>     - approx. write speed of 40,000 IOPS and 2,500 MByte/s
>>>
>>> You might need to have a large (at least non-1) number of writers
>>> to get to that sum of operations, as opposed to trying to reach it
>>> with one single stream written from one single client.
>>
>> We are aiming for about 100 writers.
>
> So if I read it correctly the writes will be 64k each.

Maybe ;-) (2,500 MByte/s spread over 40,000 IOPS comes out at roughly
64 KByte per write on average.) See below.

> That should be doable, but you probably want something like NVMe for
> DB+WAL.
>
> You might want to tune it so that larger writes also go into the WAL
> to speed up the ingress writes. But you mainly want more spindles
> rather than fewer.

I would like to give a little more insight into these numbers and the
overhead that is most probably hidden in them. The values come from
our old classic RAID storage boxes, which use btrfs + zlib compression
+ subvolumes for these backups; we collected the numbers across all of
them.

The new system should just replicate snapshots from the live Ceph
cluster, hopefully being able to use erasure coding and compression
;-) (rough sketches of what I have in mind are at the end of this
mail).

Greets,
Stefan
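PS: A few rough sketches to make the discussion more concrete. On the
WAL tuning you mention: if I understand BlueStore correctly, the knob
for this is the deferred-write threshold. A minimal sketch, assuming
BlueStore OSDs with HDD data and NVMe DB/WAL; the 128 KiB value is
only an illustration, not a tested recommendation:

    # ceph.conf on the OSD hosts (or "ceph config set osd ...")
    [osd]
    # writes below this size go to the (NVMe) WAL first and are
    # flushed to the slow device later; raising it above 64 KiB
    # should let our expected ~64K backup writes land on the NVMe
    # instead of hitting the spinners directly
    bluestore_prefer_deferred_size_hdd = 131072

The trade-off is more write traffic on the WAL device, so the NVMe
needs the endurance and bandwidth to absorb the full ingress stream.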
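For the snapshot replication itself I am thinking of the usual
export-diff/import-diff pipe; the pool, image and snapshot names
below are made up:

    # on the live cluster: ship the delta between two snapshots
    # of an image to the backup cluster
    rbd export-diff --from-snap snap-2020-01-08 \
        rbd/vm-disk-1@snap-2020-01-09 - \
      | ssh backup-host rbd import-diff - backup/vm-disk-1

The target image has to exist on the backup cluster with a matching
base snapshot (e.g. from an initial full export/import) before the
incremental diffs can be applied.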
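And for the erasure coding + compression part, something along these
lines; k/m, PG counts and names are placeholders, not sizing advice.
As far as I know RBD cannot keep its image metadata on an EC pool, so
a small replicated pool holds the headers while the EC pool carries
only the data objects:

    # EC data pool for the backup images
    ceph osd erasure-code-profile set backup-ec k=4 m=2 \
        crush-failure-domain=host
    ceph osd pool create backup-data 512 512 erasure backup-ec
    ceph osd pool set backup-data allow_ec_overwrites true

    # inline compression on the data pool (BlueStore)
    ceph osd pool set backup-data compression_algorithm zlib
    ceph osd pool set backup-data compression_mode aggressive

    # small replicated pool for the RBD image headers
    ceph osd pool create backup-meta 64 64 replicated
    rbd pool init backup-meta

    # target image: metadata in backup-meta, data on the EC pool
    rbd create --size 100G --data-pool backup-data \
        backup-meta/vm-disk-1

allow_ec_overwrites requires BlueStore OSDs, but a new backup cluster
would be built on BlueStore anyway.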