Hi Vlad,
You can check this blog: http://cephnotes.ksperis.com/blog/2017/01/27/erasure-code-on-small-clusters
Note: be aware that these settings do not automatically cover a node failure.
See this thread for why:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024423.html
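The usual approach on a cluster with fewer hosts than k+m (as in the blog above) is a custom CRUSH rule that first picks hosts and then at most two OSDs per host, so no host ever holds more than two of the seven chunks. A rough, untested sketch along those lines; the profile, rule, and pool names (ec52, ec52_rule, ecpool) and the PG count are illustrative, and the exact choose counts depend on your CRUSH tree:

```shell
# 1. EC profile with k=5 m=2. crush-failure-domain=osd here, because the
#    custom rule below takes care of the per-host spreading.
ceph osd erasure-code-profile set ec52 k=5 m=2 crush-failure-domain=osd

# 2. Decompile the CRUSH map (ceph osd getcrushmap | crushtool -d -) and
#    add a rule that selects 4 hosts and 2 OSDs on each. That yields 8
#    candidate slots for the 7 chunks, with at most 2 chunks per host:
#
#    rule ec52_rule {
#        id 2
#        type erasure
#        step set_chooseleaf_tries 5
#        step set_choose_tries 100
#        step take default
#        step choose indep 4 type host
#        step chooseleaf indep 2 type osd
#        step emit
#    }
#
#    Then recompile and inject it with crushtool -c / ceph osd setcrushmap.

# 3. Create the pool using both the profile and the custom rule:
ceph osd pool create ecpool 64 64 erasure ec52 ec52_rule
```

Keep in mind that this only controls placement, not availability: with k=5, a host holding 2 chunks going down leaves exactly 5 chunks, and with the usual EC default of min_size = k+1 = 6 the affected PGs go inactive for I/O until the host returns, which is what the warning above is about.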
Kind regards,
Caspar
On Thu, Oct 4, 2018 at 20:27, Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx> wrote:
Hello
I have a 5-server cluster and I am wondering if it's possible to create a
pool that uses a k=5 m=2 erasure code. In my experiments, I ended up with
pools whose PGs were stuck in the creating+incomplete state, even when I
created the erasure code profile with --crush-failure-domain=osd.
Assuming that what I want to do is possible, will CRUSH distribute the
chunks evenly among servers, so that if I need to bring one server down
(e.g. for a reboot), clients' ability to read or write any object would not
be disrupted? (I guess something would need to ensure that no server holds
more than two chunks of an object.)
Thanks,
Vlad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com