How safe is ceph pg repair these days?

I have a 3-replica cluster. A couple of times I have run into inconsistent
PGs. I googled it, and the Ceph docs and various blogs say to run a repair
first. But a couple of people on IRC and a mailing list thread from 2015
say that Ceph blindly copies the primary over the secondaries and calls
it good.

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-May/001370.html

I sure hope that isn't the case. If so, it would seem highly
irresponsible to implement such a naive command and call it "repair". I
have recently learned how to properly analyze the OSD logs and manually
fix these things, but not before having run repair on a dozen
inconsistent PGs. Now I'm worried about what sort of corruption I may
have introduced. Repairing things by hand is a simple heuristic: compare
the size or checksum (as indicated by the logs) for each of the 3 copies
and figure out which one is correct. Presumably matching two out of three
should win, and the odd object out should be deleted, since having the
exact same kind of error on two different OSDs is highly improbable. I
don't understand why ceph repair wouldn't have done this all along.
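
For the record, the two-out-of-three check I do by hand boils down to
something like the sketch below. The record layout and field names are
just illustrative (I pull the sizes and checksums out of the scrub
errors in the OSD logs by hand); this isn't any actual Ceph schema or
API, only the voting logic I apply:

from collections import Counter

# One illustrative record per copy of an inconsistent object, e.g. as
# transcribed from the OSD scrub errors in the logs. The field names
# here are made up for the example, not Ceph's actual output format.
replicas = [
    {"osd": 4,  "size": 4194304, "checksum": "0x7f3a1c2e"},
    {"osd": 11, "size": 4194304, "checksum": "0x7f3a1c2e"},
    {"osd": 23, "size": 4194304, "checksum": "0xdeadbeef"},  # the bad copy
]

def pick_bad_replicas(replicas):
    """Majority vote on (size, checksum): any copy whose fingerprint is
    not the most common one is flagged as the odd object out."""
    fingerprints = [(r["size"], r["checksum"]) for r in replicas]
    winner, votes = Counter(fingerprints).most_common(1)[0]
    if votes < 2:
        # No majority (all three copies differ): don't guess.
        return None
    return [r for r, fp in zip(replicas, fingerprints) if fp != winner]

bad = pick_bad_replicas(replicas)
if bad is None:
    print("no quorum among copies -- inspect by hand")
else:
    for r in bad:
        print(f"osd.{r['osd']} holds the odd copy; remove it and let "
              f"recovery restore it from a good replica")

Obviously if all three copies disagree there is no safe automatic
answer and I go digging, but the majority-vote case seems like exactly
what a command named "repair" ought to be doing.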

What is the current best practice in the use of ceph repair?

Thanks!

-- 
Tracy Reed