Re: Ceph Filesystem recovery with intact pools

On Sun, Aug 30, 2020 at 8:05 PM <cyclic3.git@xxxxxxxxx> wrote:
>
> Hi,
> I've had a complete monitor failure, which I have recovered from with the steps here: https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures
> The data and metadata pools are there and are completely intact, but ceph is reporting that there are no filesystems, where (before the failure) there was one.
>
> Is there any way of putting the filesystem back together again without having to resort to rebuilding the metadata pool with cephfs-data-scan?
> I'm on ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
>


'ceph fs new <fs_name> <metadata> <data> [--force]
[--allow-dangerous-metadata-overlay]'

The 'ceph fs new' command can create a filesystem from existing
pools. Before running it, make sure no MDS daemons are running.
Immediately after running 'fs new', run
'ceph fs reset <fs_name> --yes-i-really-mean-it'; otherwise the first
MDS to start will initialize a fresh, empty filesystem on top of the
existing metadata.
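For example, assuming the filesystem was called 'cephfs' and the
pools are 'cephfs_metadata' and 'cephfs_data' (substitute your actual
names from 'ceph osd pool ls'), the whole sequence would look roughly
like this (a sketch only, not something I have re-tested on 15.2.4):

  # stop every MDS first (repeat on each MDS host)
  systemctl stop ceph-mds.target

  # recreate the filesystem entry on top of the existing pools; both
  # flags are needed because the pools already contain data
  ceph fs new cephfs cephfs_metadata cephfs_data --force \
      --allow-dangerous-metadata-overlay

  # immediately reset it so the MDS replays the existing metadata
  # instead of initializing an empty root
  ceph fs reset cephfs --yes-i-really-mean-it

  # bring the MDS daemons back and verify
  systemctl start ceph-mds.target
  ceph fs status cephfs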




> Thanks,
> Harlan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


