I've just seen this when *removing* an OSD too.
The issue resolved itself during recovery. The OSDs were not full, not even
close; there's virtually nothing on this cluster.
This is Mimic 13.2.4 on RHEL 7.6. The OSDs are all BlueStore HDDs with SSD DBs.
Everything is otherwise default.
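For anyone comparing notes, a quick way to rule out the usual fullness
thresholds is to check the ratios in the osdmap against per-OSD utilisation
(rough sketch, standard ceph CLI):

  # Show the nearfull/backfillfull/full ratios currently set in the osdmap
  ceph osd dump | grep ratio

  # Per-OSD utilisation, to confirm nothing is anywhere near those ratios
  ceph osd df tree

In this case total usage is 7.9 TiB of 481 TiB (under 2%, see the status
below), so nothing should be anywhere near the default backfillfull ratio
of 0.90.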
  cluster:
    id:     MY ID
    health: HEALTH_ERR
            1161/66039 objects misplaced (1.758%)
            Degraded data redundancy: 220095/66039 objects degraded (333.280%), 137 pgs degraded
            Degraded data redundancy (low space): 1 pg backfill_toofull

  services:
    mon: 3 daemons, quorum san2-mon1,san2-mon2,san2-mon3
    mgr: san2-mon1(active), standbys: san2-mon2, san2-mon3
    osd: 53 osds: 52 up, 52 in; 186 remapped pgs

  data:
    pools:   16 pools, 2016 pgs
    objects: 22.01 k objects, 83 GiB
    usage:   7.9 TiB used, 473 TiB / 481 TiB avail
    pgs:     220095/66039 objects degraded (333.280%)
             1161/66039 objects misplaced (1.758%)
             1830 active+clean
             134  active+recovery_wait+undersized+degraded+remapped
             45   active+remapped+backfill_wait
             3    active+recovering+undersized+remapped
             3    active+recovery_wait+undersized+degraded
             1    active+remapped+backfill_wait+backfill_toofull

  io:
    client:   60 KiB/s wr, 0 op/s rd, 5 op/s wr
    recovery: 8.6 MiB/s, 110 objects/s
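If it reappears before recovery completes, the offending PG and the OSDs it
maps to can be pinned down with something like this (a sketch; <pgid> is
whatever health detail reports):

  # List exactly which PG is flagged backfill_toofull
  ceph health detail | grep -i backfill_toofull

  # Inspect that PG's up/acting OSD sets and recovery/backfill state
  ceph pg <pgid> query

That at least makes it easy to confirm the target OSDs are nowhere near the
backfillfull threshold at the moment the flag is set.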
On 07/02/2019 04:26, Brad Hubbard wrote:
> Let's try to restrict discussion to the original thread
> "backfill_toofull while OSDs are not full" and get a tracker opened up
> for this issue.