ceph bootstrap initialization :: nvme drives not empty after >12h

Hi! Yesterday I bootstrapped (with cephadm) my first Ceph installation and things looked more or less OK, but today the OSDs are still not ready and the dashboard shows these warnings:
MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
PG_AVAILABILITY: Reduced data availability: 64 pgs inactive
PG_DEGRADED: Degraded data redundancy: 2/14 objects degraded (14.286%), 66 pgs undersized
TOO_FEW_OSDS: OSD count 2 < osd_pool_default_size 3

and in the logs:
3/12/21 12:18:19 PM [INF] OSD <1> is not empty yet. Waiting a bit more
3/12/21 12:18:19 PM [INF] OSD <0> is not empty yet. Waiting a bit more
3/12/21 12:18:19 PM [INF] Can't even stop one OSD. Cluster is probably busy. Retrying later..
3/12/21 12:18:19 PM [ERR] cmd: osd ok-to-stop failed with: 31 PGs are already too degraded, would become too degraded or might become unavailable. (errno:-16)
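
In case it helps, this is roughly what I am looking at to inspect the state (a sketch only, assuming the standard ceph CLI from inside "cephadm shell"; nothing here is specific to my cluster):

  # enter the container that has the ceph CLI
  cephadm shell
  # the detail behind each health warning code
  ceph health detail
  # per-pool replication settings (size / min_size / crush_rule)
  ceph osd pool ls detail
  # PG summary, to see how many are undersized / inactive
  ceph pg stat
  # OSD layout, to confirm both NVMe OSDs are up and on the same host
  ceph osd tree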

This is a single-node, all-in-one Ceph install with 2 local NVMe drives as OSDs (meant to be 2x replicated, like a RAID1 array).
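
For what it's worth, here is a minimal sketch of the settings I understand are needed for 2x replication on a single host (assuming the default osd_pool_default_size=3, which the TOO_FEW_OSDS warning shows, and a CRUSH rule with "host" as the failure domain; "<pool>" is just a placeholder, not one of my pool names):

  # make new pools default to 2 copies instead of 3
  ceph config set global osd_pool_default_size 2
  ceph config set global osd_pool_default_min_size 1
  # a replicated rule that spreads copies across OSDs instead of hosts
  ceph osd crush rule create-replicated replicated-osd default osd
  # apply to each existing pool
  ceph osd pool set <pool> crush_rule replicated-osd
  ceph osd pool set <pool> size 2
  ceph osd pool set <pool> min_size 1

With the default "host" failure domain a single node can only place one copy per PG, which would explain the undersized PGs; spreading over OSDs lets both copies land on the two NVMe drives.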

So, can anyone tell me what is going on?

Thanks a lot!!
Adrian


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
