Recommended procedure in case of OSD_SCRUB_ERRORS / PG_DAMAGED

(17.2.4, 3 replicated, Container install)

Hello,

since much of the information found on the Web or in books is outdated, I
want to ask which procedure is recommended to repair a damaged PG in status
active+clean+inconsistent on Ceph Quincy.

IMHO, the best process for a pool with 3 replicas would be to check whether
two of the replicas are identical and replace the third, differing one.
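For reference, the usual first step on Quincy is to locate the inconsistent PG and inspect which replica disagrees before repairing anything; a minimal sketch (the PG id 2.1a is a placeholder):

```shell
# Show which PGs are flagged inconsistent
ceph health detail

# List the inconsistent objects in that PG, including which
# shard/replica fails its checksum (2.1a is a hypothetical PG id;
# requires a recent deep-scrub of the PG)
rados list-inconsistent-obj 2.1a --format=json-pretty

# Trigger a repair; on a replicated pool this overwrites the bad
# copy from an authoritative replica
ceph pg repair 2.1a
```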

If I understand it correctly, ceph-objectstore-tool could be used for this
approach, but unfortunately it is difficult even to start, especially in a
Docker environment. (The OSD has to be marked "down", and the Ubuntu package
ceph-osd, which includes ceph-objectstore-tool, starts server processes that
confuse the dockerized environment.)
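In a cephadm/container deployment, one way to get a working ceph-objectstore-tool without installing the ceph-osd package on the host is to stop the daemon and enter its container image, which already ships the tool; a sketch, assuming cephadm and a hypothetical OSD id 3:

```shell
# Stop the OSD daemon so its object store is no longer in use
ceph orch daemon stop osd.3

# Enter a shell using that daemon's container image, with the
# daemon's data directory mounted
cephadm shell --name osd.3

# Inside that shell: list the objects of the damaged PG
# (2.1a is a hypothetical PG id)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
    --pgid 2.1a --op list
```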

Is "ceph pg repair" safe to use, and is there a risk in enabling
osd_scrub_auto_repair and osd_repair_during_recovery?
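For what it's worth, both options can be inspected and toggled through the config database; a sketch (to the best of my knowledge both default to false on Quincy):

```shell
# Check the current values
ceph config get osd osd_scrub_auto_repair
ceph config get osd osd_repair_during_recovery

# Let deep-scrub repair PGs automatically; the number of errors it
# will auto-repair is capped by osd_scrub_auto_repair_num_errors
ceph config set osd osd_scrub_auto_repair true
```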

Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



