Yes I did, and despite the "Too many repaired reads on 1 OSDs" warning, health is
back to HEALTH_OK.
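For reference, that OSD_TOO_MANY_REPAIRS warning fires once a single OSD has
repaired more reads than a configured threshold. A quick way to see which OSD
tripped it and what the threshold is (osd.62 below is only an example, run it
against whichever OSD the warning names):

    ceph health detail                                        # names the OSD behind the warning
    ceph daemon osd.62 config get mon_osd_warn_num_repaired   # warning threshold, default 10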
But this is the second time it has happened and I do not know whether I should
go forward with the upgrade or hold off. Or maybe it is a bad move to run
compaction right after migrating to 15.2.14.
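For what it is worth, the per-OSD compaction I ran can be done either online
through the admin socket or offline with the OSD stopped; a rough sketch
(osd.56 and the data path are just placeholders):

    ceph daemon osd.56 compact                                        # online, run on the OSD host
    systemctl stop ceph-osd@56
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-56 compact  # offline compaction
    systemctl start ceph-osd@56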
On 20.10.2021 at 09:21, Szabo, Istvan (Agoda) wrote:
Have you tried to repair the PG?
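I.e. roughly along these lines (the PG id is only a placeholder for the one
reported by ceph health detail):

    ceph health detail                                     # lists the inconsistent PG
    rados list-inconsistent-pg default.rgw.buckets.index   # PGs with scrub errors in that pool
    ceph pg repair <pgid>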
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------
On 2021. Oct 20., at 9:04, Glaza <glaza2@xxxxx> wrote:
Hi Everyone, I am in the process of upgrading from Nautilus (14.2.22) to
Octopus (15.2.14) on CentOS 7 (the Mon/Mgr hosts were additionally migrated to
CentOS 8 beforehand). Each day I upgraded one host, and after all its OSDs were
back up I manually compacted them one by one. Today (8 hosts upgraded, 7 still
to go) I started getting errors like "Possible data damage: 1 pg inconsistent".
The first time the PG had "acting [56,58,62]", and I thought "OK": the osd.62
log has many lines like "osd.62 39892 class rgw_gc open got (1) Operation not
permitted", so maybe RGW did not clean up some omaps properly and Ceph did not
notice until a scrub ran. But now I have got "acting [56,57,58]", and none of
these OSDs has those rgw_gc errors in its log. All affected OSDs are Octopus
15.2.14 on NVMe, hosting the default.rgw.buckets.index pool. Has anyone
experienced this problem? Any help would be appreciated.
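(For completeness, the scrub error details for an affected PG can be dumped
roughly like this; the PG id is a placeholder for the one reported by
ceph health detail, and the output shows which shard/OSD reported the error:

    rados list-inconsistent-obj <pgid> --format=json-pretty
    ceph pg deep-scrub <pgid>   # re-check after a repair
)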
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx