Re: Data recovery after losing all monitors

Bryan Henderson <bryanh@xxxxxxxxxxxxxxxx> wrote on Sat, Jun 2, 2018 at 10:23:
>Luckily, it's not. I don't remember if the MDS maps contain entirely
>ephemeral data, but on the scale of cephfs recovery scenarios that's just
>about the easiest one. Somebody would have to walk through it; you probably
>need to look up the table states and mds counts from the RADOS store and
>generate a new (epoch 1 or 2) mdsmap which contains those settings ready to
>go. Or maybe you just need to "create" a new cephfs on the prior pools and
>set it up with the correct number of MDSes.
>
>At the moment the mostly-documented recovery procedure probably involves
>recovering the journals, flushing everything out, and resetting the server
>state to a single MDS, and if you lose all your monitors there's a good
>chance you need to be going through recovery anyway, so...*shrug*
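
For reference, the mostly-documented procedure referred to above is the one on
the cephfs disaster-recovery page; roughly, it's a sketch like the following
(exact invocations vary by release, and newer cephfs-journal-tool versions
also take a --rank argument):

  # back up the journal before touching anything
  cephfs-journal-tool journal export backup.bin

  # recover what file metadata we can from the journal into the backing store
  cephfs-journal-tool event recover_dentries summary

  # truncate the journal and clear the session table
  cephfs-journal-tool journal reset
  cephfs-table-tool all reset session

  # drop the filesystem back to a single active MDS
  ceph fs reset <fs_name> --yes-i-really-mean-it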

The idea of just creating a new filesystem from old metadata and data pools
intrigued me, so I looked into it further, including reading some code.

It appears that there's nothing in the MDS map that can't be regenerated, and
while regenerating it is probably easy for a Ceph developer, there are no
tools available that can do it.

'fs new' comes close, but according to

  http://docs.ceph.com/docs/master/cephfs/disaster-recovery/

it causes a new, empty root directory to be created, so you lose access to all
your files (and leak all the storage space they occupy).

Kill all the MDS daemons first, create a new filesystem with the old pools,
then run 'fs reset' before starting any MDS; see the sketch below.
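
Concretely, that amounts to something like this (a sketch; "cephfs" and the
pool names are placeholders, and --force is the flag that lets 'fs new' adopt
pools that already contain data):

  # with every MDS daemon stopped:
  ceph fs new cephfs old_metadata_pool old_data_pool --force

  # keep the existing root directory instead of fabricating a fresh one
  ceph fs reset cephfs --yes-i-really-mean-it

  # only now restart the MDS daemons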



The same document mentions 'fs reset', which also comes close and keeps the
existing root directory, but it requires, perhaps gratuitously, that a
filesystem already exist in the MDS map, albeit maybe corrupted, before it
regenerates it.

I'm tempted to modify Ceph to add an 'fs recreate' that does what 'fs reset'
does, but without expecting anything to be there already.  Maybe that's all it
takes, along with 'ceph-objectstore-tool --op update-mon-db', to recover from
a lost cluster map.
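
The update-mon-db step is part of the documented monitor-store rebuild from
OSDs.  A single-host sketch (the paths, mon id, and keyring location here are
assumptions; the documented procedure loops over every OSD host):

  ms=/tmp/mon-store
  mkdir $ms

  # scrape cluster map history out of every (stopped) OSD on this host
  for osd in /var/lib/ceph/osd/ceph-*; do
      ceph-objectstore-tool --data-path $osd --op update-mon-db \
          --mon-store-path $ms
  done

  # rebuild a monitor store from what was collected
  ceph-monstore-tool $ms rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring

  # swap it in under one of the monitors (mon id "foo" here)
  mv /var/lib/ceph/mon/ceph-foo/store.db /var/lib/ceph/mon/ceph-foo/store.db.bad
  mv $ms/store.db /var/lib/ceph/mon/ceph-foo/store.db

Notably, the docs say the rebuilt store contains no MDS maps, which is exactly
the hole an 'fs recreate' would fill.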

--
Bryan Henderson                                   San Jose, California
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
