ceph status
ceph osd tree
Is your meta pool on SSDs, instead of on the same root and OSDs as the rest of the cluster?
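If the metadata pool's CRUSH rule targets a root (e.g. an SSD-only root) that has no usable OSDs, `ceph df` will report MAX AVAIL 0 and 100% used for that one pool while its PGs stay undersized/degraded. A quick way to check is to compare each pool's rule against the CRUSH tree; the commands below are a sketch assuming Jewel (10.2.x), where the pool property is still called `crush_ruleset` rather than the later `crush_rule`:

```shell
# Which CRUSH rule does each pool use? (Jewel: crush_ruleset)
ceph osd pool get PoolMeta crush_ruleset
ceph osd pool get Pool1 crush_ruleset

# Dump the rules and note which root each rule's "take" step starts from
ceph osd crush rule dump

# Compare against the tree: does that root actually contain up/in OSDs?
ceph osd tree
```

If PoolMeta's rule takes a root with no OSDs under it (or fewer hosts than its replica count), that would explain both the 100% used reading and the 300 stuck undersized PGs.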
_______________________________________________

Hello,
I am getting the errors below and am unable to resolve them, even after stopping and starting the OSDs. All the OSDs seem to be up.
How do I repair or fix the OSDs manually? I am using CephFS. Oddly, ceph df is showing the metadata pool as 100% used (with USED shown in KB), but the pool is 1886G (with 3 copies). I can still write to the CephFS without any issue. Not sure why Ceph is reporting the wrong info of 100% full.
ceph version 10.2.7
health HEALTH_WARN
300 pgs degraded
300 pgs stuck degraded
300 pgs stuck unclean
300 pgs stuck undersized
300 pgs undersized
recovery 28/19674 objects degraded (0.142%)
recovery 56/19674 objects misplaced (0.285%)
GLOBAL:
    SIZE    AVAIL   RAW USED   %RAW USED
    5463T   5462T   187G       0
POOLS:
    NAME       ID   USED     %USED    MAX AVAIL   OBJECTS
    Pool1      15   233M     0        1820T       3737
    Pool2      16   0        0        1820T       0
    PoolMeta   17   34719k   100.00   0           28
Any help is appreciated
--
Deepak
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com