Manual pg repair help

Is there no Ceph wiki page with examples of manual repairs with
ceph-objectstore-tool (e.g. for cases where pg repair and pg scrub don't work)?
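
For context, that means the standard sequence on this PG (17.36, from the
error below) has not cleared the inconsistency:

ceph pg deep-scrub 17.36
ceph pg repair 17.36

I am assuming rados list-inconsistent-obj is still the right way to get the
details of the scrub errors:

rados list-inconsistent-obj 17.36 --format=json-pretty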


I have been having this issue for quite some time.

2019-09-02 14:17:34.175139 7f9b3f061700 -1 log_channel(cluster) log [ERR] : deep-scrub 17.36
17:6ca1f70a:::rbd_data.1f114174b0dc51.0000000000000974:head : expected clone
17:6ca1f70a:::rbd_data.1f114174b0dc51.0000000000000974:4 1 missing
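
The object is on osd.29; with the OSD stopped I can address it directly with
ceph-objectstore-tool. I assume the JSON object spec used in the command
further down is simply what --op list prints for it, i.e. something like:

ceph-objectstore-tool --type bluestore --data-path /var/lib/ceph/osd/ceph-29 \
  --pgid 17.36 --op list rbd_data.1f114174b0dc51.0000000000000974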

I tried to resolve it according to this procedure [0], but now I am getting
this message:

ceph-objectstore-tool --dry-run --type bluestore \
  --data-path /var/lib/ceph/osd/ceph-29 --pgid 17.36 \
  '{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","snapid":-2,"hash":1357874486,"pool":17,"namespace":"","max":0}' \
  remove

Snapshots are present, use removeall to delete everything
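
Before deciding anything I would like to at least see the head object's
SnapSet. I assume the dump op is the way to do that (again with osd.29
stopped, same object spec):

ceph-objectstore-tool --type bluestore --data-path /var/lib/ceph/osd/ceph-29 \
  --pgid 17.36 \
  '{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","snapid":-2,"hash":1357874486,"pool":17,"namespace":"","max":0}' \
  dump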


I am not sure about this removeall, and I do not want to start deleting
snapshots just hoping it will amount to something. Besides, if only a single
4 MB block is damaged, do you really need to purge a 40 GB snapshot? I would
rather have a 40 GB snapshot missing 4 MB than no snapshot at all.
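
What I am really hoping for is a less destructive option.
ceph-objectstore-tool also has a remove-clone-metadata op; my assumption
(please correct me if this is wrong) is that it only drops the missing
clone's entry from the head object's SnapSet and does not touch any actual
snapshot data:

ceph-objectstore-tool --dry-run --type bluestore \
  --data-path /var/lib/ceph/osd/ceph-29 --pgid 17.36 \
  '{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","snapid":-2,"hash":1357874486,"pool":17,"namespace":"","max":0}' \
  remove-clone-metadata 4

(4 being the clone id from the deep-scrub error, if I am reading it
correctly.) Is that a sane approach here, or is removeall really the only way?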


[0]
https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg47218.html

PS. Is there a record of who has had the longest unhealthy cluster state?
Because I would not like it to be me ;)



