Re: 1 pg inconsistent

If you run the following (substitute your pool name for <pool>):

rados -p <pool> list-inconsistent-obj 1.574 --format=json-pretty

You should get detailed information about which piece of data actually has the error, and you can determine what to do with it from there.
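For illustration, a trimmed sketch of what the JSON output typically looks like; the structure follows the list-inconsistent-obj format, though exact fields vary by release, and the object name and OSD ids here are taken from the log excerpt further down:

{
    "epoch": ...,
    "inconsistents": [
        {
            "object": {
                "name": "rbd_data.515c96b8b4567.0000000000007a7c",
                ...
            },
            "errors": [],
            "union_shard_errors": [ "read_error" ],
            "shards": [
                { "osd": 19, "errors": [] },
                { "osd": 25, "errors": [ "read_error" ] },
                { "osd": 2,  "errors": [] }
            ]
        }
    ]
}

A shard whose errors list is non-empty (here osd.25, with read_error) is the copy that failed; the shards with empty error lists are the good replicas.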

-----Original Message-----
From: Abhimnyu Dhobale <adhobale8@xxxxxxxxx> 
Sent: Tuesday, July 14, 2020 5:13 AM
To: ceph-users@xxxxxxx
Subject: 1 pg inconsistent

Good Day,

Ceph frequently shows the error below; each time, running a pg repair resolves it.

[root@vpsapohmcs01 ~]# ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
    pg 1.574 is active+clean+inconsistent, acting [19,25,2]
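For reference, the repair mentioned above is presumably the standard command for the PG named in the health output:

ceph pg repair 1.574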

[root@vpsapohmcs02 ~]# cat /var/log/ceph/ceph-osd.19.log | grep error
2020-07-12 11:42:11.824 7f864e0b2700 -1 log_channel(cluster) log [ERR] : 1.574 shard 25 soid 1:2ea0a7a3:::rbd_data.515c96b8b4567.0000000000007a7c:head : candidate had a read error
2020-07-12 11:42:15.035 7f86520ba700 -1 log_channel(cluster) log [ERR] : 1.574 deep-scrub 1 errors
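Since the scrub error is a read error on shard 25, it may also be worth checking the media health of the disk backing osd.25 on that host; a minimal sketch, where /dev/sdX stands in for that OSD's actual device (hypothetical name):

smartctl -a /dev/sdX | grep -i -e reallocated -e pending -e uncorrect
dmesg | grep -i 'i/o error'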

[root@vpsapohmcs01 ~]# ceph --version
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

Could you please suggest how to proceed?

--
Thanks & Regards
Abhimnyu Dhobale
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


