PGs inconsistent, should I fear data loss?

Hello,
We recently upgraded two clusters to Ceph Luminous with BlueStore, and we now see many more PGs in state active+clean+inconsistent than we did before ("Possible data damage, xx pgs inconsistent").
 
This is probably due to the per-object checksums in BlueStore, which detect errors that previously went unnoticed.
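
As far as I understand from the documentation, the way to see which PGs and which copies are affected would be something like the following (the pool name and PG id are placeholders, not from our clusters):

    ceph health detail                                        # lists the inconsistent PGs
    rados list-inconsistent-pg <pool-name>                    # inconsistent PGs in a given pool
    rados list-inconsistent-obj <pg-id> --format=json-pretty  # shows which copy fails its checksum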

We have some pools with replica 2 and some with replica 3.

I have read past threads on this list and have seen that Ceph does not repair inconsistent PGs automatically.

Even a manual repair sometimes fails.
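
To be clear, by manual repair I mean something like this (the PG id is again a placeholder):

    ceph pg deep-scrub <pg-id>   # re-scrub the PG to confirm the inconsistency
    ceph pg repair <pg-id>       # ask the primary OSD to try to repair the PG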

I would like to understand whether I am at risk of losing data:

- with replica 2, I hope that Ceph chooses the correct replica by looking at the checksums
- with replica 3, I hope that there is no problem at all

How can I tell Ceph to simply create the second replica somewhere else?

I ask because I suppose that with replica 2 and inconsistent PGs I effectively have only one good copy of the data.
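
If the bad copy can be identified, my guess is that something along these lines would force a clean copy to be rebuilt elsewhere, but I am not sure it is the right approach (OSD id and pool name are placeholders):

    ceph osd out <osd-id>                  # mark the OSD holding the bad copy out, so its data re-replicates elsewhere
    ceph osd pool set <pool-name> size 3   # or raise the pool size so an additional copy is created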

Thank you in advance for any help.

Mario




