Hello,
I have a Ceph cluster (Nautilus 14.2.8) with two filesystems and three MDS daemons:
mds1 manages fs1,
mds2 manages fs2,
mds3 is a standby.
I want to completely remove fs1.
It seems that the command to use is ceph fs rm fs1 --yes-i-really-mean-it,
and then to delete the data and metadata pools with ceph osd pool delete.
However, in many threads I noticed that you must shut down the MDS before
running ceph fs rm.
Is that still the case?
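
For reference, here is the sequence I was planning to run, assuming that
ceph fs fail (available on Nautilus) is the modern replacement for
manually stopping the MDS, and that fs1_data and fs1_metadata are
placeholder names for my actual pools:

  # Fail fs1 so its MDS rank is stopped cleanly before removal:
  ceph fs fail fs1
  # Remove the filesystem itself:
  ceph fs rm fs1 --yes-i-really-mean-it
  # Pool deletion is disabled by default; enable it, then delete both pools:
  ceph config set mon mon_allow_pool_delete true
  ceph osd pool delete fs1_data fs1_data --yes-i-really-really-mean-it
  ceph osd pool delete fs1_metadata fs1_metadata --yes-i-really-really-mean-it

Please correct me if any of these steps is wrong.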
What happens in my configuration with two filesystems? If I stop mds1,
mds3 will take over fs1. If I then stop mds3, what will mds2 do: try to
manage both filesystems, or continue serving only fs2?
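
In case it helps, this is how I was planning to observe the failover,
assuming a systemd deployment (the daemon id mds1 is a placeholder):

  # Show which daemon is active for each filesystem, and the standbys:
  ceph fs status
  ceph mds stat
  # Stop one MDS daemon on its host and watch which standby takes over:
  systemctl stop ceph-mds@mds1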
Thanks for your advice.
F.