Hi!
I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to CephFS. This storage is used for backup images (large sequential reads and writes).
To save space and get a RAIDZ2 (RAID6)-like setup, I am planning the following erasure-code profile:
ceph osd erasure-code-profile set myprofile \
    k=3 \
    m=2 \
    ruleset-failure-domain=rack
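
For completeness, here is a rough sketch of how I would wire that profile into CephFS. The pool names, PG counts and the mount path are just placeholders I picked, and if I understand correctly, on Luminous and later the profile option is called crush-failure-domain rather than ruleset-failure-domain:

# replicated metadata pool and replicated default data pool
ceph osd pool create cephfs_metadata 64 64
ceph osd pool create cephfs_data 64 64
ceph fs new cephfs cephfs_metadata cephfs_data

# EC data pool using the profile above; overwrites must be enabled for CephFS on BlueStore
ceph osd pool create cephfs_data_ec 128 128 erasure myprofile
ceph osd pool set cephfs_data_ec allow_ec_overwrites true
ceph fs add_data_pool cephfs cephfs_data_ec

# direct the backup directory to the EC pool via a file layout
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/backups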
Performance is not the first priority, which is why I do not plan to put the WAL/DB on a separate device (a broken NVMe taking out several OSDs would be more administrative overhead than losing single OSDs).
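
In other words, each OSD would be created with data, WAL and DB all on the same spinning disk, roughly like this (the device name is just an example):

# BlueStore OSD without a separate --block.db / --block.wal device
ceph-volume lvm create --bluestore --data /dev/sdX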
The disks are attached via SAS multipath; throughput in general is no problem, but I have not tested with Ceph yet.
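
Before going live I would probably run a quick sequential benchmark against the EC data pool, something like the following (60 seconds of 4 MB writes by default, 16 concurrent ops; the numbers are arbitrary):

rados bench -p cephfs_data_ec 60 write -t 16 --no-cleanup
rados bench -p cephfs_data_ec 60 seq -t 16
rados -p cephfs_data_ec cleanup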
Is anyone using CephFS + BlueStore + EC 3/2 without a separate WAL/DB device, and is it working well?
Thank you.
Kevin