I have probably misunderstood how to create erasure-coded pools, so I may be in need of some theory; I would appreciate it if you could point me to documentation that might clarify my doubts.
So far I have one cluster with 3 hosts and 30 OSDs (10 per host).
I created an erasure code profile that looks like this:
"
# ceph osd erasure-code-profile get ec4x2rs
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_sol_van
w=8
"
If I create a pool using this profile, or any profile where K+M > number of hosts, the pool gets stuck:
"
# ceph -s
  cluster:
    id:     eb4aea44-0c63-4202-b826-e16ea60ed54d
    health: HEALTH_WARN
            Reduced data availability: 16 pgs inactive, 16 pgs incomplete
            2 pools have too many placement groups
            too few PGs per OSD (4 < min 30)

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 11d)
    mgr: ceph01(active, since 74m), standbys: ceph03, ceph02
    osd: 30 osds: 30 up (since 2w), 30 in (since 2w)

  data:
    pools:   11 pools, 32 pgs
    objects: 0 objects, 0 B
    usage:   32 GiB used, 109 TiB / 109 TiB avail
    pgs:     50.000% pgs not active
             16 active+clean
             16 creating+incomplete
# ceph osd pool ls
test_ec
test_ec2
"
The pool will never leave this "creating+incomplete" state.
The pools were created like this:
"
# ceph osd pool create test_ec2 16 16 erasure ec4x2rs
# ceph osd pool create test_ec 16 16 erasure
"
The pool created with the default profile comes up correctly.
My default profile looks like this (the ec4x2rs profile is shown in full above):
"
# ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van
"
From what I've read, it seems to be possible to create erasure-coded pools where K+M is higher than the number of hosts. Is this not so?
What am I doing wrong? Do I have to create a special CRUSH rule, something like the sketch below?
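My guess, based on the CRUSH documentation, is a rule along these lines, which would pick 3 hosts and then place 2 chunks on distinct OSDs within each of them (the rule name and id are placeholders, and I have not tested this):
"
rule ec4x2_rule {
        id 2
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 3 type host
        step chooseleaf indep 2 type osd
        step emit
}
"
If that is the right direction, I assume I would inject it with crushtool (decompile, edit, recompile, ceph osd setcrushmap) and then pass the rule name when creating the pool.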