MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
PG_AVAILABILITY: Reduced data availability: 64 pgs inactive
PG_DEGRADED: Degraded data redundancy: 2/14 objects degraded (14.286%), 66 pgs undersized
TOO_FEW_OSDS: OSD count 2 < osd_pool_default_size 3

and in the logs:

3/12/21 12:18:19 PM [INF] OSD <1> is not empty yet. Waiting a bit more
3/12/21 12:18:19 PM [INF] OSD <0> is not empty yet. Waiting a bit more
3/12/21 12:18:19 PM [INF] Can't even stop one OSD. Cluster is probably busy. Retrying later..
3/12/21 12:18:19 PM [ERR] cmd: osd ok-to-stop failed with: 31 PGs are already too degraded, would become too degraded or might become unavailable. (errno: -16)
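For reference, the same check that fails in that [ERR] line can be run by hand to see the state behind it; a minimal sketch, assuming a working admin keyring on the node:

    # List the inactive/undersized/degraded PGs behind the health warnings
    ceph health detail

    # Ask the same question the maintenance machinery asks: would stopping
    # OSD 0 leave any PG unavailable or too degraded?
    ceph osd ok-to-stop 0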
This is a single-node, whole-package Ceph install with 2 local NVMe drives as OSDs (to be used 2x replicated, like a RAID 1 array).
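One likely mismatch, assuming the pools still carry the defaults (the pool settings aren't shown above): size-3 pools can never go active+clean on 2 OSDs, and the default replicated CRUSH rule places copies on distinct hosts, which a single node cannot satisfy. A sketch of settings matching a 2x single-node layout; the pool name "mypool" and rule name "replicated-osd" are placeholders:

    # Default new pools to 2 replicas instead of 3
    ceph config set global osd_pool_default_size 2

    # CRUSH rule that chooses OSDs rather than hosts (single-node case)
    ceph osd crush rule create-replicated replicated-osd default osd

    # Apply to an existing pool
    ceph osd pool set mypool crush_rule replicated-osd
    ceph osd pool set mypool size 2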
So, can anyone tell me what is going on?

Thanks a lot!!

Adrian