Hi,
It seems that the command to use is ceph fs rm fs1 --yes-i-really-mean-it
and then to delete the data and metadata pools with ceph osd pool delete,
but in many threads I noticed that you must shut down the mds before
running ceph fs rm.
Is it still the case?
Yes.
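If you want to double-check which daemon is active for which fs and
which one is standby before stopping anything, the status commands are
harmless to run at any time:

   ceph fs status
   ceph mds stat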
What happens in my configuration (I have 2 fs)? If I stop mds1, mds3
will take over its filesystem. If I stop mds3, what will mds2 do (try
to manage the 2 fs, or continue only with fs2)?
First stop the standby mds3 (so there is temporarily no standby mds
available); this shouldn't have any impact on the two active mds, each
will keep serving only its assigned fs. Then stop mds1 and remove fs1.
Ceph will go into WARN state because of the missing standby daemon.
Start mds3 again to get a standby mds back, then you can clean up the
pools.
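Roughly, the whole sequence could look like this (just a sketch,
assuming a systemd-based deployment where the unit instance matches the
mds name, and using fs1_metadata / fs1_data as placeholder pool names,
substitute your real ones):

   # on the host running the standby mds3
   systemctl stop ceph-mds@mds3

   # on the host running mds1 (active for fs1)
   systemctl stop ceph-mds@mds1

   # remove the filesystem
   ceph fs rm fs1 --yes-i-really-mean-it

   # bring the standby back (optionally also start mds1 again later
   # if you want a second standby)
   systemctl start ceph-mds@mds3

   # pool deletion has to be allowed explicitly, and the pool name
   # is given twice on purpose
   ceph config set mon mon_allow_pool_delete true
   ceph osd pool delete fs1_metadata fs1_metadata --yes-i-really-really-mean-it
   ceph osd pool delete fs1_data fs1_data --yes-i-really-really-mean-it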
Quoting Francois Legrand <fleg@xxxxxxxxxxxxxx>:
Hello,
I have a ceph cluster (nautilus 14.2.8) with 2 filesystems and 3 mds.
mds1 is managing fs1
mds2 manages fs2
mds3 is standby
I want to completely remove fs1.
It seems that the command to use is ceph fs rm fs1 --yes-i-really-mean-it
and then to delete the data and metadata pools with ceph osd pool delete,
but in many threads I noticed that you must shut down the mds before
running ceph fs rm.
Is it still the case?
What happens in my configuration (I have 2 fs)? If I stop mds1, mds3
will take over its filesystem. If I stop mds3, what will mds2 do (try
to manage the 2 fs, or continue only with fs2)?
Thanks for your advice.
F.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx