Hi,
running Ceph 16.2.7 on a pure NVMe cluster with 9 nodes, I am
experiencing "Reduced data availability: 448 pgs inactive".
"ceph -s" no longer shows any statistics or pool information.
The RBDs are still operational, and "ceph report" lists the OSDs as
expected.
I am wondering how to debug this situation further.
  cluster:
    id:     0393d3c0-8788-4b9f-XXXX-YYYYYYYYYYYY
    health: HEALTH_WARN
            Reduced data availability: 448 pgs inactive
            448 pgs not deep-scrubbed in time
            448 pgs not scrubbed in time

  services:
    mon: 6 daemons, quorum hbase10,hbase11,hbase13,hbase16,hbase17,hbase18 (age 2h)
    mgr: hbase14(active, since 11d), standbys: hbase15, hbase17, hbase16, hbase18, hbase12, hbase11, hbase13, hbase10
    mds: 1/1 daemons up, 5 standby
    osd: 7 osds: 6 up (since 2h), 6 in (since 6w)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 448 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             448 unknown
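
For what it is worth, this is what I plan to run next. My working
assumption (unconfirmed) is that "100.000% pgs unknown" means the
active mgr is not receiving PG stats from the OSDs, rather than the
PGs actually being unavailable. The commands are plain ceph CLI;
hbase14 is the active mgr from the "ceph -s" output above:

    # Which PGs are inactive, and since when?
    ceph health detail
    ceph pg dump_stuck inactive

    # Do the OSDs themselves look sane?
    ceph osd tree
    ceph osd df

    # If the stats pipeline is the problem, failing over the active
    # mgr and watching its log may help (assuming a systemd-based
    # deployment for the journalctl unit name):
    ceph mgr fail hbase14
    journalctl -u ceph-mgr@hbase14 --since "1 hour ago"

Does this look like a reasonable approach, or is there a better
starting point?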
--
Kind Regards
ppa. Martin Konold
--
Martin Konold - Prokurist (authorized officer), CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Geschäftsführer (Managing Director): Andreas Mack
Im Köller 3, 70794 Filderstadt, Germany