Re: CephFS Recovery/Internals Questions

On Fri, Aug 2, 2019 at 12:13 AM Pierre Dittes <pierre.dittes@xxxxxxxxxxx> wrote:
>
> Hi,
> we had a major ****up with our CephFS. Long story short: no journal backup, and the journal was truncated.
> Now I still see the metadata pool with all its objects, and the data pool is fine; as far as I know, neither was corrupted. The last mount attempt showed a blank FS, though.
>
> What are the proper steps now to restore everything so it is visible and usable again? I found the documentation very confusing; many things were left unexplained.
>
> Another step that was taken before the truncation was the dentries summary command; the docs say "stored in the backing store", whatever that means.
>
>     POOL                          ID     STORED        OBJECTS     USED        %USED     MAX AVAIL
>     cephfs_metadata               18     136 GiB       4.63M       137 GiB     0.66      6.7 TiB
>     cephfs_data                   19     272 TiB       434.65M     861 TiB     72.04     111 TiB
>
> Any input is helpful

If the expert-only disaster recovery steps are confusing to you, and
yet some of them got run (since your journal was truncated), you're
going to need to give a much clearer account of exactly what was run,
and in what order, before anyone can help you.

CephFS metadata is stored in per-directory objects within RADOS (the
"backing store"): each directory's dentries and inodes live in the
omap of its directory object in the metadata pool. But in order to
aggregate IO and preserve atomicity across updates, we stream metadata
changes into a per-MDS journal before they are flushed to the backing
store. If you have some very hot metadata, the backing store copy may
be quite stale, with a number of newer versions existing only in the
journal.
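
As a rough sketch of how those journaled updates are normally flushed
back into the backing store (assuming a filesystem named "cephfs" with
a single rank 0 -- both assumptions on my part -- and with the MDS
stopped):

    # Assumes fs name "cephfs" and rank 0; adjust for your cluster.
    # See what, if anything, is left in the journal.
    cephfs-journal-tool --rank=cephfs:0 journal inspect

    # Export a backup before doing anything destructive.
    cephfs-journal-tool --rank=cephfs:0 journal export backup.bin

    # Replay journal events, writing any dentry/inode updates they
    # contain into the metadata pool (the "backing store").
    cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary

Of course, with your journal already truncated, there may be little or
nothing left for recover_dentries to find.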

If the journal is gone, some inodes may have been lost if they were
never flushed to begin with. (Although perhaps they were, if you ran
the "recover_dentries summary" option?) To rebuild a working tree
you'll need to do the full backwards scrub with cephfs-data-scan
(https://docs.ceph.com/docs/master/cephfs/disaster-recovery-experts/).
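
For concreteness, a minimal sketch of that data-scan sequence, using
the pool names from your ceph df output and assuming the MDS is
stopped (the linked page covers the surrounding steps, like resetting
the journal and session table, which I'm omitting here):

    # Pool names taken from the ceph df output above.
    # Initialize the metadata pool structures for a rebuild.
    cephfs-data-scan init

    # Pass 1: scan the data pool and reassemble file extents/sizes.
    cephfs-data-scan scan_extents cephfs_data

    # Pass 2: recreate inodes and dentries in the metadata pool.
    cephfs-data-scan scan_inodes cephfs_data

    # Pass 3: repair hard-link and inode-linkage information.
    cephfs-data-scan scan_links

With ~435M objects, you will want to run the scan_extents and
scan_inodes passes in parallel across multiple workers (the
--worker_n/--worker_m options described on that page).
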
-Greg

>
> Thanks
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


