Re: inconsistent pg after upgrade nautilus to octopus


 



Yes I did, and despite "Too many repaired reads on 1 OSDs", health is back to HEALTH_OK. But this is the second time it has happened, and I don't know whether I should go forward with the upgrade or hold off. Or maybe running a compaction right after migrating to 15.2.14 is a bad move.
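For what it's worth, in Octopus the counter behind the OSD_TOO_MANY_REPAIRS warning can be inspected and reset per OSD once you are satisfied the repaired reads were legitimate (the `<id>` below is a placeholder for the affected OSD's id):

```shell
# Show which OSD is behind the "Too many repaired reads" warning
ceph health detail

# Reset the repaired-reads counter on that OSD so the
# OSD_TOO_MANY_REPAIRS warning clears (replace <id> as needed)
ceph tell osd.<id> clear_shards_repaired
```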

On 20.10.2021 o 09:21, Szabo, Istvan (Agoda) wrote:
Have you tried to repair pg?

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

On 2021. Oct 20., at 9:04, Glaza <glaza2@xxxxx> wrote:


Hi Everyone,

I am in the process of upgrading Nautilus (14.2.22) to Octopus (15.2.14) on CentOS 7 (Mon/Mgr were additionally migrated to CentOS 8 beforehand). Each day I upgraded one host and, after all OSDs were up, I manually compacted them one by one. Today (8 hosts upgraded, 7 still to go) I started getting errors like "Possible data damage: 1 pg inconsistent". The first time it was "acting [56,58,62]", but I thought "OK": the osd.62 logs contain many lines like "osd.62 39892 class rgw_gc open got (1) Operation not permitted". Maybe rgw did not clean up some omaps properly, and Ceph did not notice until a scrub happened. But now I have "acting [56,57,58]", and none of these OSDs has those rgw_gc errors in its logs. All affected OSDs are Octopus 15.2.14 on NVMe, hosting the default.rgw.buckets.index pool.

Has anyone had experience with this problem? Any help appreciated.
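For reference, the usual way to see what a scrub actually flagged before deciding on a repair is roughly the following (the `<pgid>` is a placeholder for the inconsistent PG's id from `ceph health detail`):

```shell
# List PGs currently flagged inconsistent
ceph pg ls inconsistent

# Show the per-object scrub errors for the affected PG
rados list-inconsistent-obj <pgid> --format=json-pretty

# Trigger a repair once the nature of the errors is understood
ceph pg repair <pgid>
```

The `list-inconsistent-obj` output distinguishes omap digest mismatches from read errors, which can help tell an index-omap problem apart from a disk-level one.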

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





