Resolving a pg inconsistent Issue

Hi Ceph users,

We are running a SUSE SES 5.5 cluster, which is largely based on Luminous
with some Mimic backports.

We've been doing some large-scale reshuffling after adding additional OSDs,
and during this process one PG became inconsistent; investigation suggests
there was a read error.

We would like to target it with a deep-scrub and possibly a repair attempt
before marking the individual OSD down and replacing the drive. However,
because of the weeks of reshuffling, our cluster is behind on deep-scrubs,
and it seemingly refuses to scrub the marked PG, or at least it keeps
scheduling other PGs for deep-scrubs first.
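
For context, what we had in mind is roughly the following, with 7.1ab
standing in as a placeholder for the actual PG id:

    # inspect the reported inconsistency for the PG
    rados list-inconsistent-obj 7.1ab --format=json-pretty

    # request a targeted deep-scrub, then repair if the error is confirmed
    ceph pg deep-scrub 7.1ab
    ceph pg repair 7.1ab

The deep-scrub request seems to get queued behind the backlog of overdue
scrubs rather than running promptly.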

Is there any method we can use to give this deep-scrub priority? How about
temporarily adjusting the deep-scrub interval to 6 months; would that allow
us to force the desired PG's deep-scrub to occur sooner? Or could we set
some of the noscrub flags and proceed with a pg repair attempt?
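
To make those questions concrete, these are the sorts of knobs we are
considering (values are placeholders; 15552000 seconds is roughly 6 months):

    # temporarily stretch the deep-scrub interval on all OSDs
    ceph tell osd.* injectargs '--osd_deep_scrub_interval 15552000'

    # or pause routine scrubbing so a manual scrub/repair is not competing
    ceph osd set noscrub
    ceph osd set nodeep-scrub

We would of course revert the injected value and unset the flags
(ceph osd unset noscrub / ceph osd unset nodeep-scrub) once the PG is
repaired.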

Thank you for any suggestions or advice,

-- 
Steven Pine
webair.com
P: 516.938.4100 x
E: steven.pine@xxxxxxxxxx


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


