Re: implications of losing the MDS map

On Tue, Aug 8, 2017 at 1:51 AM, Daniel K <sathackr@xxxxxxxxx> wrote:
> I finally figured out how to get ceph-monstore-tool (compiled from
> source) and am ready to attempt to recover my cluster.
>
> I have one question -- in the instructions,
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/
> under Recovery from OSDs, Known limitations:
>
> ->
>
> MDS Maps: the MDS maps are lost.
>
>
> What are the implications of this? Do I just need to rebuild this, or is
> there a data loss component to it? -- Is my data stored in CephFS still
> safe?

It depends.  If you only had a single active MDS, then you can
probably get back to a working state by doing an "fs new" pointing at
your existing pools, followed by an "fs reset" to make it skip the
"creating" phase.  Make sure you do not have any MDS daemons running
until after you have done the fs reset.
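
For reference, a minimal sketch of that sequence, assuming a
filesystem called "cephfs" with metadata and data pools named
"cephfs_metadata" and "cephfs_data" (substitute your own names and
check the flags against your release), might look like:

    # stop every MDS daemon first (unit name varies by install)
    systemctl stop ceph-mds.target

    # recreate the filesystem entry pointing at the existing pools;
    # --force may be needed because the pools are not empty
    ceph fs new cephfs cephfs_metadata cephfs_data --force

    # skip the "creating" phase so rank 0 uses the existing metadata
    ceph fs reset cephfs --yes-i-really-mean-it

    # only now start the MDS daemons again
    systemctl start ceph-mds.target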

If you had multiple active MDS daemons, then you would need to use the
disaster recovery tools to try to salvage their metadata before
resetting the MDS map.
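
The relevant commands are in the CephFS disaster recovery docs
(http://docs.ceph.com/docs/master/cephfs/disaster-recovery/).  As a
rough outline only, with "cephfs" again as a placeholder name and on
a release where these tools are available:

    # take a backup of each rank's journal before touching anything
    # (repeat per rank, using --rank where your release supports it)
    cephfs-journal-tool journal export backup.bin

    # flush whatever dentries can be salvaged from the journal
    # into the metadata pool
    cephfs-journal-tool event recover_dentries summary

    # then wipe the journal and the session table
    cephfs-journal-tool journal reset
    cephfs-table-tool all reset session

    # and finally drop back to a single active rank
    ceph fs reset cephfs --yes-i-really-mean-it

Exactly where the fs new / fs reset steps fit in relative to those
tools depends on the state of your cluster, so treat this as a sketch
and follow the disaster recovery documentation for the real procedure.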

John
