Hello Stefan,

I ran this command yesterday, but the status has not changed. Other PGs with status "inconsistent" were repaired after a day, but in this case it doesn't work.

instructing pg 32.15c on osd.49 to repair

Normally the PG would change to "repair", but it did not.

________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: Monday, 26 June 2023 11:27
To: Jorge JP <jorgejp@xxxxxxxxxx>; ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

On 6/26/23 08:38, Jorge JP wrote:
> Hello,
>
> After a deep-scrub, my cluster showed this error:
>
> HEALTH_ERR 1/38578006 objects unfound (0.000%); 1 scrub errors; Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent; Degraded data redundancy: 2/77158878 objects degraded (0.000%), 1 pg degraded
> [WRN] OBJECT_UNFOUND: 1/38578006 objects unfound (0.000%)
>     pg 32.15c has 1 unfound objects
> [ERR] OSD_SCRUB_ERRORS: 1 scrub errors
> [ERR] PG_DAMAGED: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent
>     pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting [49,47], 1 unfound
> [WRN] PG_DEGRADED: Degraded data redundancy: 2/77158878 objects degraded (0.000%), 1 pg degraded
>     pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting [49,47], 1 unfound
>
>
> I searched the internet for how to solve this, but I'm confused.
>
> Can anyone help me?

Does "ceph pg repair 32.15c" work for you?

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
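For reference, a repair cannot complete while the PG still has an unfound object, so it may help to inspect the unfound object before anything else. A rough sketch of the diagnostic steps, using the PG ID 32.15c from this thread (commands require a live cluster; `mark_unfound_lost` is destructive and should only be a last resort):

```shell
# Show which objects in the PG are unfound and which OSDs were probed
ceph pg 32.15c list_unfound

# Detailed PG state, including recovery/peering information
ceph pg 32.15c query

# List the scrub inconsistencies recorded for this PG
rados list-inconsistent-obj 32.15c --format=json-pretty

# If the missing replica is on an OSD that can be brought back online,
# restarting/re-adding that OSD may let recovery find the object.
# Otherwise, as a LAST RESORT, give up on the unfound object:
#   "revert" rolls back to a previous version if one exists,
#   "delete" forgets the object entirely (data loss).
# ceph pg 32.15c mark_unfound_lost revert

# Once no objects are unfound, re-issue the repair
ceph pg repair 32.15c
```

Note that `ceph pg repair` only schedules the repair; it runs when the OSD next gets to it, so the state change is not immediate.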