Hi
With the help of Dan van der Ster I managed to confirm my suspicion
that the problem with osdmaps not trimming correctly is caused by PGs
that the MONs somehow never marked as created.
For example, from the mon log:
2020-11-16 12:57:00.514 7f131496f700 10 mon.monb01@0(probing).osd e72792 update_creating_pgs will instruct osd.265 to create 28.3ff@67698
2020-11-16 12:57:25.982 7f1315971700 10 mon.monb01@0(leader).osd e72792 update_creating_pgs will instruct osd.265 to create 28.3ff@72792
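In other words, the mon keeps re-issuing the create for 28.3ff epoch
after epoch. These update_creating_pgs lines only appear with mon debug
logging raised, so to look for them on your own mons something like

root@monb01:~# ceph tell mon.* injectargs '--debug_mon 10/10'

should work (and the same with the default '--debug_mon 1/5' to turn it
back down afterwards).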
But:
root@monb01:/var/log/ceph# ceph pg dump |grep 28.3ff
dumped all
28.3ff 3841 0 0 0 0 15970230272 0 0 3028 3028 active+clean 2020-11-16 05:38:27.338826 72792'87928 72792:335764 [265,277,282] 265 [265,277,282] 265 72792'85741 2020-11-16 05:38:27.338783 72588'79082 2020-11-10 18:42:43.182436 0
root@monb01:/var/log/ceph#
root@monb01:/var/log/ceph# ceph health detail
HEALTH_OK
root@monb01:/var/log/ceph#
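So the PG has long been active+clean and the cluster is HEALTH_OK, yet
the mon apparently still has it on its creating list, and that pins the
oldest osdmap the MONs are willing to trim. If you want to check the
retained osdmap range on your own cluster, something along these lines
should show it (when trimming is stuck, osdmap_first_committed lags far
behind osdmap_last_committed):

root@monb01:~# ceph report 2>/dev/null | grep -E 'osdmap_(first|last)_committed'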
Does anyone know how to fix this? I found that it can be cleared by
deleting and recreating the affected pool, but there's a lot of data in
it, so is there a way to fix this without recreating the pool?
--
Best regards
Marcin