I think a few other things that could help would be `ceph osd df tree`, which will show usage rolled up across the different CRUSH failure domains. And if you're doing something like erasure-coded pools, or anything other than 3x replication, `ceph osd crush rule dump` may provide some further context alongside the tree output.

Also, the cluster is running Luminous (12), which went EOL three years ago tomorrow, so there are likely a good number of under-the-hood improvements to be gained just by moving forward from Luminous. That said, I would take care of the scrub errors before doing any major upgrades, and check your upgrade path first (you can only upgrade two releases at a time, FileStore OSDs need handling, etc.); a quick sketch of where I'd start is below.
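Off the top of my head, and untested here, so treat it as a rough sketch rather than a recipe (`<pgid>` is just a placeholder for whatever PG IDs `ceph health detail` actually reports):

    # Show usage per CRUSH bucket (root/rack/host) to spot imbalance:
    ceph osd df tree

    # Dump the CRUSH rules to see how each pool places its data:
    ceph osd crush rule dump

    # Before planning the upgrade, check how many OSDs are still FileStore:
    ceph osd count-metadata osd_objectstore

    # Track down the scrub errors, then repair the affected PGs:
    ceph health detail
    ceph pg repair <pgid>

-Reed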