Re: ceph health JSON format has changed

> 	In previous versions of Ceph, I was able to determine which PGs had
> scrub errors, and then a cron.hourly script ran "ceph pg repair" for them,
> provided that they were not already being scrubbed. In Luminous, the bad
> PG is not visible in "ceph --status" anywhere. Should I use something like
> "ceph health detail -f json-pretty" instead?

'ceph pg ls inconsistent' lists all inconsistent PGs.
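
If you want something scriptable for your cron.hourly job, the same listing is available as JSON via the usual -f json flag. A minimal sketch, assuming the Luminous output is a flat array of PG stat objects with "pgid" and "state" fields (do verify the field names against what your cluster actually prints before relying on this):

    # cron.hourly sketch: repair inconsistent PGs that aren't already scrubbing
    ceph pg ls inconsistent -f json |
        jq -r '.[] | select(.state | contains("scrubbing") | not) | .pgid' |
        while read -r pgid; do
            ceph pg repair "$pgid"
        done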

> 	Also, is it possible to configure Ceph to attempt repairing the bad PGs
> itself, as soon as the scrub fails? I run most of my OSDs on top of a bunch of
> old spinning disks, and a scrub error almost always means that there is a bad
> sector somewhere, which can easily be fixed by rewriting the lost data using
> "ceph pg repair".

I don't know of a good way to repair inconsistencies automatically from within Ceph. However, I seem to remember someone saying that BlueStore OSDs attempt to fix read errors discovered during client reads by rewriting the unreadable replica/shard, and that there was a plan to do the same for errors found during scrubbing. I can't remember the details (this was a while ago, at Cephalocon APAC), so I may be completely off the mark here.

Cheers,
Tom
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


