Okay, so your applied CRUSH rule has failure domain "room", of which
you have three, but the third room has no OSDs available; check your
osd tree output, that's why Ceph fails to create a third replica. To
resolve this you can either change the rule to a different failure
domain (for example "host") and then increase the size, or you create
a new rule and apply it to the pool(s).
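A rough, untested sketch of the second option (the rule name
"replicated_host" is just a placeholder, and "default" assumes your
hosts sit under the default root):

  # new replicated rule with failure domain "host" under the default root
  ceph osd crush rule create-replicated replicated_host default host
  # switch the pool(s) to the new rule and set the replica count
  ceph osd pool set <pool> crush_rule replicated_host
  ceph osd pool set <pool> size 3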
Either way you'll have to decide how to place three replicas, e.g.
move hosts within the CRUSH tree (and probably the third room into
the default root) to enable an even distribution. Note that moving
buckets within the CRUSH tree will cause rebalancing.
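Moving buckets would look something like this (the names are only
examples, adjust them to your osd tree):

  # move a host bucket under the default root / into a room
  ceph osd crush move <hostname> root=default room=<room>
  # or move a whole room bucket under the default root
  ceph osd crush move <room-bucket> root=default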
Regards,
Eugen
Quoting stefan.pinter@xxxxxxxxxxxxxxxx:
sure!
ceph osd pool ls detail
https://privatebin.net/?85105578dd50f65f#4oNunvNfLoNbnqJwuXoWXrB1idt4zMGnBXdQ8Lkwor8p
I guess this needs some cleaning up regarding snapshots - could this
be a problem?
ceph osd crush rule dump
https://privatebin.net/?bd589bc9d7800dd3#3PFS3659qXqbxfaXSUcKot3ynmwRG2mDjpxSmhCxQzAB
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx