Re: Ceph Filesystem recovery with intact pools

Alright, I didn't realize that the MDS was affected by this as well.
In that case there's probably no other way than running the 'ceph fs new ...' command as Yan, Zheng suggested. Do you have backups of your cephfs contents in case that goes wrong? I'm not sure whether a pool copy would help in any way here, and I haven't recreated a cephfs from existing pools myself yet, so maybe someone else can provide more details about the risks of doing that. I understand your hesitation, though.
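
For reference, recreating the fs on top of the existing pools would look
roughly like this (untested on my side, the names 'cephfs',
'cephfs_metadata' and 'cephfs_data' are only placeholders, please compare
with the disaster recovery docs for your release before running anything):

  # optional safety net: dump the metadata pool to a file first
  rados -p cephfs_metadata export /backup/cephfs_metadata.bin

  # recreate the filesystem on the surviving pools; --force is needed
  # because the metadata pool already contains objects
  ceph fs new cephfs cephfs_metadata cephfs_data --force

  # reset the fs before any MDS joins, so that rank 0 replays the
  # existing metadata instead of creating a fresh filesystem
  ceph fs reset cephfs --yes-i-really-mean-it

As far as I understand, the reset step is what keeps the first MDS from
initialising a brand new filesystem over your old metadata. You would also
have to recreate an MDS keyring ('ceph auth get-or-create mds.<id> ...')
since those were lost together with the mon store.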

Regards,
Eugen


Quote from Cyclic 3 <cyclic3.git@xxxxxxxxx>:

Both the MDS maps and the keyrings are lost as a side effect of the monitor
recovery process I mentioned in my initial email, detailed here:
https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures

On Mon, 31 Aug 2020 at 21:10, Eugen Block <eblock@xxxxxx> wrote:

I don't understand: what happened to the previous MDS? If there are
cephfs pools, there also was an old MDS, right? Can you explain that,
please?


Quote from cyclic3.git@xxxxxxxxx:

> I added an MDS, but there was no change in either output (apart from
> recognising the existence of an MDS)
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx






_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



