"Don't run with replication 1 ever".
Even if this is a test, it tests something for which a resilient cluster is specifically designed to avoid.
As for enumerating what data is missing, it depends on whether the affected pool(s) held CephFS data, RBD images, or RGW data.
When this kind of data loss happens to you, you restore from your backups.
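For the CephFS case, here is a rough sketch of how one might start narrowing down which files were hit. It assumes a running cluster, a CephFS data pool (here hypothetically named "cephfs_data"), and a client mount at /mnt/cephfs; the inode prefix used below is a made-up example. CephFS data objects are named <inode-in-hex>.<block-number>, so an object name can be traced back to a file via its inode:

```shell
#!/bin/sh
# Sketch, not a recipe: pool name, mount point, and the example
# inode prefix are assumptions for illustration.

# 1) See which PGs went stale when the OSD died:
ceph pg dump_stuck stale

# 2) CephFS data objects are named <inode-hex>.<block>, e.g.
#    10000000000.00000000. Given an inode prefix from an affected
#    object name, convert hex -> decimal and look the file up by
#    inode number on the mounted filesystem:
ino_hex=10000000000                    # example inode prefix (hex)
ino_dec=$(printf '%d' "0x$ino_hex")    # decimal form for find -inum
find /mnt/cephfs -inum "$ino_dec"      # prints the file path, if any
```

RBD and RGW pools use different object-naming schemes, so this mapping only applies to CephFS data pools; and with replication 1 there is no surviving copy to repair from, only to enumerate against.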
On Mon, 13 Aug 2018 at 14:26, Surya Bala <sooriya.balan@xxxxxxxxx> wrote:
Any suggestion on this, please?

Regards,
Surya Balan

On Fri, Aug 10, 2018 at 11:28 AM, Surya Bala <sooriya.balan@xxxxxxxxx> wrote:

Hi folks,

I was trying to test the following case: with a pool whose replication count is 1, if one OSD goes down, the PGs mapped to that OSD become stale. If the hardware fails, the data on that OSD is lost, so parts of some files are lost. How can I find which files got corrupted?

Regards,
Surya Balan
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
May the most significant bit of your life be positive.