Re: Stale PG data loss

"Don't run with replication 1 ever".

Even if this is a test, it tests something for which a resilient cluster is specifically designed to avoid.
As for enumerating what data is missing, it would depend on if the pool(s) had cephfs, rbd images or rgw data in them.
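
If it is CephFS data, here is a rough sketch of how you might narrow it down. This is only an outline; the pool name (cephfs_data), mount point (/mnt/cephfs), example object name and file path below are assumptions, so adjust them to your cluster.

# List which PGs are stale
ceph health detail | grep stale
ceph pg dump_stuck stale

# CephFS data objects are named "<inode-in-hex>.<stripe-index>", so a known
# object name can be mapped back to a path via its inode number, e.g. for an
# object like 10000000abc.00000000 on a mount at /mnt/cephfs:
find /mnt/cephfs -inum $((0x10000000abc))

# Or walk the other way: for each file, check which PG its first data object
# maps to and compare that against the stale PG list (this only covers the
# first stripe object; larger files span many objects and PGs):
ino_hex=$(printf '%x' "$(stat -c %i /mnt/cephfs/path/to/file)")
ceph osd map cephfs_data "${ino_hex}.00000000"

For RBD you would do something similar using the image's block_name_prefix from "rbd info"; for RGW there is no simple object-to-file mapping to walk.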

When this kind of data loss happens to you, you restore from your backups.




On Mon, 13 Aug 2018 at 14:26, Surya Bala <sooriya.balan@xxxxxxxxx> wrote:
Any suggestions on this, please?

Regards
Surya Balan

On Fri, Aug 10, 2018 at 11:28 AM, Surya Bala <sooriya.balan@xxxxxxxxx> wrote:
Hi folks,

I was trying to test the case below.

With a pool whose replication count is 1, if one OSD goes down, the PGs mapped to that OSD become stale.

If a hardware failure happens, the data on that OSD is lost, so parts of some files are lost. How can I find out which files got corrupted?

Regards
Surya Balan
 



--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
