Re: How to recover from active+clean+inconsistent+failed_repair?


Hmm, I'm getting a bit confused. Could you also send the output of "ceph osd pool ls detail"?

Did you look at the disk/controller cache settings?
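If you haven't checked yet, on plain SATA/SAS drives the volatile write cache can usually be queried like this (a generic sketch; /dev/sdX is a placeholder for one of your OSD devices, and drives behind a RAID controller need the vendor's own tool instead):

  # query the drive's volatile write-cache setting
  smartctl -g wcache /dev/sdX
  # alternatively, for SATA drives
  hdparm -W /dev/sdX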

I think you should start a deep-scrub with "ceph pg deep-scrub 3.b" and record the output of "ceph -w | grep '3\.b'" (note the single quotes).
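In practice (same commands, just run side by side):

  # terminal 1: kick off the deep scrub of PG 3.b
  ceph pg deep-scrub 3.b

  # terminal 2: capture the cluster log lines for that PG
  ceph -w | grep '3\.b'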

The error messages you included in one of your first e-mails cover only 1 of the 3 scrub errors (3 lines per error). We need to find all 3 errors.
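Once the deep scrub has finished, you can also dump what the OSDs recorded for that PG (a standard rados command on reasonably recent releases):

  # list all inconsistencies recorded for PG 3.b
  rados list-inconsistent-obj 3.b --format=json-pretty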

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Sagara Wijetunga <sagarawmw@xxxxxxxxx>
Sent: 02 November 2020 14:25:08
To: ceph-users@xxxxxxx; Frank Schilder
Subject: Re: Re: How to recover from active+clean+inconsistent+failed_repair?

Hi Frank


> the primary OSD is probably not listed as a peer. Can you post the complete output of

> - ceph pg 3.b query
> - ceph pg dump
> - ceph osd df tree

> in a pastebin?

Yes, the Primary OSD is 0.

I have attached the above as .txt files. Please let me know if you still cannot read them.

Regards

Sagara

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



