Hi,
are the new OSDs in the same CRUSH root, and do they have the same
device class? Can you share the output of 'ceph osd df tree'?
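If the new disks ended up under a different root, or with a different
device class than the one your pool's CRUSH rule selects, the pool would
not see the added capacity, which would fit what you describe. A quick
way to check is roughly this (all standard commands, nothing
cluster-specific assumed):

  # per-OSD usage and weights, grouped by the CRUSH hierarchy
  ceph osd df tree

  # which device classes exist and where each OSD sits in the tree
  ceph osd crush class ls
  ceph osd tree

  # which root and device class each pool's CRUSH rule selects from
  ceph osd crush rule dump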
Quoting Dallas Jones <djones@xxxxxxxxxxxxxxxxx>:
My 3-node Ceph cluster (14.2.4) has been running fine for months. However,
my data pool came close to full a couple of weeks ago, so I added 12 new
OSDs, roughly doubling the capacity of the cluster. Despite that, the pool
size has not changed, and the health of the cluster has gotten worse.
The dashboard shows the following cluster status:
- PG_DEGRADED_FULL: Degraded data redundancy (low space): 2 pgs backfill_toofull
- POOL_NEARFULL: 6 pool(s) nearfull
- OSD_NEARFULL: 1 nearfull osd(s)
Output from ceph -s:
  cluster:
    id:     e5a47160-a302-462a-8fa4-1e533e1edd4e
    health: HEALTH_ERR
            1 nearfull osd(s)
            6 pool(s) nearfull
            Degraded data redundancy (low space): 2 pgs backfill_toofull

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 5w)
    mgr: ceph01(active, since 4w), standbys: ceph03, ceph02
    mds: cephfs:1 {0=ceph01=up:active} 2 up:standby
    osd: 33 osds: 33 up (since 43h), 33 in (since 43h); 1094 remapped pgs
    rgw: 3 daemons active (ceph01, ceph02, ceph03)

  data:
    pools:   6 pools, 1632 pgs
    objects: 134.50M objects, 7.8 TiB
    usage:   42 TiB used, 81 TiB / 123 TiB avail
    pgs:     213786007/403501920 objects misplaced (52.983%)
             1088 active+remapped+backfill_wait
             538  active+clean
             4    active+remapped+backfilling
             2    active+remapped+backfill_wait+backfill_toofull

  io:
    recovery: 477 KiB/s, 330 keys/s, 29 objects/s
Can someone steer me in the right direction for how to get my cluster
healthy again?
Thanks in advance!
-Dallas
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx