active+clean+inconsistent and pg repair

Hello,

Ceph status is showing:

1 pgs inconsistent
1 scrub errors
1 active+clean+inconsistent

I located the error messages in the logfile after querying the pg in question:

root@hqosd3:/var/log/ceph# zgrep -Hn 'ERR' ceph-osd.32.log.1.gz

ceph-osd.32.log.1.gz:846:2017-03-17 02:25:20.281608 7f7744d7f700 -1 log_channel(cluster) log [ERR] : 3.2b8 shard 32: soid 3/4650a2b8/rb.0.fe307e.238e1f29.00000076024c/head candidate had a read error, data_digest 0x84c33490 != known data_digest 0x974a24a7 from auth shard 62

ceph-osd.32.log.1.gz:847:2017-03-17 02:30:40.264219 7f7744d7f700 -1 log_channel(cluster) log [ERR] : 3.2b8 deep-scrub 0 missing, 1 inconsistent objects

ceph-osd.32.log.1.gz:848:2017-03-17 02:30:40.264307 7f7744d7f700 -1 log_channel(cluster) log [ERR] : 3.2b8 deep-scrub 1 errors

Is this a case where it would be safe to use 'ceph pg repair'?

The documentation indicates there are times when running this command is less safe than others, and I would like to be sure before I do so.
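For reference, this is a sketch of the inspection steps typically suggested before repairing, assuming a Jewel-or-later cluster where `rados list-inconsistent-obj` is available (PG ID 3.2b8 taken from the log above):

```shell
# Confirm which PG is flagged inconsistent
ceph health detail | grep inconsistent

# Inspect the inconsistent object(s) in the PG: this shows, per shard,
# whether the error is a read error / digest mismatch and which shard
# is the authoritative copy
rados list-inconsistent-obj 3.2b8 --format=json-pretty

# Only after confirming the primary can recover from the auth shard:
ceph pg repair 3.2b8
```

Since the log names shard 62 as the auth shard and shard 32 as the one with the read error, repair would presumably rewrite the bad copy from the authoritative one, but I would appreciate confirmation from the list.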

Thanks,
Shain


-- 
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
