Re: 1 pg inconsistent and does not recover

On 6/28/23 09:41, Frank Schilder wrote:
Hi Niklas,

please don't do any of the recovery steps yet! Your problem is almost certainly a non-issue. I had a failed disk with 3 scrub errors, leading to the candidate read error messages you have:

ceph status/df/pool stats/health detail at 00:00:06:
   cluster:
     health: HEALTH_ERR
             3 scrub errors
             Possible data damage: 3 pgs inconsistent

After rebuilding the data, it still looked like:

   cluster:
     health: HEALTH_ERR
             2 scrub errors
             Possible data damage: 2 pgs inconsistent

What's the issue here? The issue is that the PGs have not been deep-scrubbed after the rebuild. The reply "no scrub data available" from the list-inconsistent command is the clue. The response to that is not to attempt a manual repair but to issue a deep-scrub.
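As a minimal sketch of that workflow (the PG id 2.1a is a placeholder; take the real ids from "ceph health detail", and note that exact output wording varies by Ceph version):

   # list the inconsistencies recorded for one PG; until a fresh
   # deep-scrub has run, this only replies "no scrub data available"
   rados list-inconsistent-obj 2.1a --format=json-pretty

   # schedule a deep-scrub for that PG; once it completes, the scrub
   # error and the inconsistent flag should clear on their own
   ceph pg deep-scrub 2.1a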

Unfortunately, the command "ceph pg deep-scrub ..." does not really work; the deep-scrub reservation almost always gets cancelled very quickly.

On what Ceph version do you have this issue? We use this command every day, hundreds of times, and it always works.

Or is this an issue when you have a degraded cluster?
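For reference, one way to check whether a requested deep-scrub actually ran (again with a placeholder PG id) is to compare the scrub stamps before and after issuing the command:

   # the DEEP_SCRUB_STAMP column in the pg stats shows when the last
   # deep-scrub finished; if it does not advance, the reservation was
   # cancelled before the scrub could run
   ceph pg dump pgs | grep ^2.1a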

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


