HowTo CephFS recovery tools?

Hello CephFS / Ceph gurus...

I am currently using CephFS to store data in a Ceph object storage cluster.

CephFS uses separate pools for data and metadata.

I am trying to understand how to recover CephFS in a situation where:
1. The cluster loses more OSDs than the number of configured replicas.
2. There is loss of all objects for specific PGs in the data pool.
3. There is loss of all objects for specific PGs in the metadata pool.
4. The cluster has been recovered, mostly by deleting the problematic OSDs from the CRUSH map and by recreating the stale PGs.
5. The MDS has been restarted and CephFS remounted.
My questions are:

a./ If the MDS journal can be replayed, it may be able to recreate some of the lost metadata, provided the relevant information is still present in the log. Is this correct?
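
For context, before resetting anything I was planning to look at the journal along these lines (rank 0 assumed, and please correct me if these are not the right commands):

    # check whether the journal for rank 0 is readable or damaged
    cephfs-journal-tool --rank=0 journal inspect

    # take a backup of the journal before any destructive step
    cephfs-journal-tool --rank=0 journal export backup.bin

    # write whatever dentries/inodes can still be recovered from the
    # journal back into the metadata pool
    cephfs-journal-tool --rank=0 event recover_dentries summary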

b./ If all the metadata for given files is lost, but the files themselves still have all their objects intact, would we be able to mount CephFS? If yes, how would those files appear in the filesystem? With '??? ??? ???' for the attributes? And in the same location as before?
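
(To clarify what I mean by "objects intact": assuming I still know a file's inode number from before the incident -- 0x10000000001 here is just a made-up example -- this is how I have been checking the data pool:)

    # CephFS data objects are named <inode in hex>.<block index>,
    # so the first object of inode 0x10000000001 should be:
    rados -p <data pool> stat 10000000001.00000000

    # or list every object belonging to that inode
    rados -p <data pool> ls | grep '^10000000001\.'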

c./ In the situation described in b./, what would be the proper steps to start injecting metadata for the orphaned files? Please correct me if I am wrong, but I am assuming the sequence below (my reading of what each step does is sketched after the list):

    - cephfs-table-tool 0 reset session
    - cephfs-table-tool 0 reset snap
    - cephfs-table-tool 0 reset inode
    - cephfs-journal-tool --rank=0 journal reset
    - cephfs-data-scan init
    - cephfs-data-scan scan_extents <data pool>
    - cephfs-data-scan scan_inodes <data pool>

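This is how I currently read each of those steps; again, please correct me if I have any of it wrong:

    # wipe the session, snap and inode tables for rank 0
    cephfs-table-tool 0 reset session
    cephfs-table-tool 0 reset snap
    cephfs-table-tool 0 reset inode

    # throw away the (possibly damaged) MDS journal for rank 0
    cephfs-journal-tool --rank=0 journal reset

    # recreate the root and MDS directory inodes in the metadata pool
    cephfs-data-scan init

    # first pass over the data pool: work out each file's size and mtime
    # from its objects and record that on the file's first object
    cephfs-data-scan scan_extents <data pool>

    # second pass: read the backtrace xattr on each file's first object
    # and inject the recovered inodes back into the metadata pool
    cephfs-data-scan scan_inodes <data pool>

I also understand the two scan passes can be parallelised across several workers with --worker_n/--worker_m, but I have not tried that yet.
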
d./ In step c./, what is the exact difference between 'cephfs-table-tool 0 reset session', 'cephfs-table-tool 0 reset snap' and 'cephfs-table-tool 0 reset inode'? Are there situations where we would not want to use all three cephfs-table-tool reset commands, but only one or two of them?

e./ In what circumstances would we do a reset of the filesystem with 'ceph fs reset cephfs --yes-i-really-mean-it'?
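
(My current understanding, which may well be wrong, is that this would only ever be done with the filesystem taken offline and the MDS ranks marked as failed, along the lines of:)

    # take the filesystem down so clients stop talking to the MDS
    ceph fs set cephfs cluster_down true

    # mark rank 0 as failed
    ceph mds fail 0

    # reset the filesystem maps (destructive!)
    ceph fs reset cephfs --yes-i-really-mean-it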

Thank you in advance.
Cheers
-- 
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW  2006
T: +61 2 93511937
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
