Re: Ceph MDS remove

Sorry,
forgot to mention that I'm running Ceph 0.87 on CentOS 7.

On 24/02/2015 10:20, Xavier Villaneau wrote:
Hello,

I also had to remove the MDSs on a Giant test cluster a few days ago,
and stumbled upon the same problems.

On 24/02/2015 09:58, ceph-users wrote:
Hi all,

I've set up a ceph cluster using this playbook:
https://github.com/ceph/ceph-ansible

I've configured in my hosts list
[mdss]
hostname1
hostname2
....

I now need to remove these MDSs from the cluster.
The only document I found is this:
http://www.sebastien-han.fr/blog/2012/07/04/remove-a-mds-server-from-a-ceph-cluster/


# service ceph -a stop mds
=== mds.z-srv-m-cph02 ===
Stopping Ceph mds.z-srv-m-cph02 on z-srv-m-cph02...done
=== mds.r-srv-m-cph02 ===
Stopping Ceph mds.r-srv-m-cph02 on r-srv-m-cph02...done
=== mds.r-srv-m-cph01 ===
Stopping Ceph mds.r-srv-m-cph01 on r-srv-m-cph01...done
=== mds.0 ===
Stopping Ceph mds.0 on zrh-srv-m-cph01...done
=== mds.192.168.0.1 ===
Stopping Ceph mds.192.168.0.1 on z-srv-m-cph01...done
=== mds.z-srv-m-cph01 ===
Stopping Ceph mds.z-srv-m-cph01 on z-srv-m-cph01...done

[root@z-srv-m-cph01 ceph]# ceph mds stat
e1: 0/0/0 up

1. Question: why are the MDSs not stopped?

I also had trouble stopping my MDSs. They would start up again even if
I killed the processes… I suggest you try:
sudo stop ceph-mds-all
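
On CentOS 7 the Upstart job above probably does not exist; something
like this might work instead (only a sketch using the sysvinit script
shipped with 0.87, and the rank 0 below is an assumption, adjust to
your cluster):

# on each MDS node, stop the daemon via the sysvinit script
sudo service ceph stop mds.$(hostname -s)
# if it keeps coming back, mark the active rank (assumed 0 here) as failed first
ceph mds fail 0
# then verify nothing is left running or marked up
ceph mds stat
pgrep ceph-mds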

2. When I try to remove them:

# ceph mds rm mds.z-srv-m-cph01 z-srv-m-cph01
Invalid command: mds.z-srv-m-cph01 doesn't represent an int
mds rm <int[0-]> <name (type.id)> : remove nonactive mds
Error EINVAL: invalid command

In the mds rm command, the <int[0-]> refers to the ID of the metadata
pool used by CephFS (since there can only be one at the moment), and the
<name (type.id)> is simply mds.n where n is 0, 1, etc. There may be
other valid values for type.id, but this worked for me.
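
For example, something along these lines might work (just a sketch;
the pool ID 1 and the name mds.0 are guesses, check yours first):

# list the pools to find the ID of the CephFS metadata pool
ceph osd lspools
# remove the non-active MDS; first argument = metadata pool ID, second = mds.<n>
ceph mds rm 1 mds.0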

The ansible playbook created a configuration like this in ceph.conf:
[mds]

[mds.z-srv-m-cph01]
host = z-srv-m-cph01

I believe you'll also need to delete the [mds] sections in ceph.conf
(see the sketch below), but since I do not know much about ansible I
can't give you more advice on this.
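
Roughly, the cleanup could look like this (the host name is the one
from your ceph.conf above, the rest is only a sketch):

# in the ansible inventory, leave the [mdss] group empty so the
# playbook no longer deploys MDSs:
[mdss]

# and in ceph.conf on every node, remove the MDS sections, e.g.:
#   [mds]
#   [mds.z-srv-m-cph01]
#   host = z-srv-m-cph01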

Finally, as described in the blog post you linked, you need to reset
CephFS afterwards (or the cluster health will complain):
ceph mds newfs <metadata_pool_ID> <data_pool_ID> --yes-i-really-mean-it
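
For instance (the pool IDs 1 and 0 below are placeholders, take the
real ones from ceph osd lspools):

# look up the metadata and data pool IDs
ceph osd lspools
# recreate the filesystem map on those pools (this discards the old CephFS metadata)
ceph mds newfs 1 0 --yes-i-really-mean-it
# health should be back to normal afterwards
ceph -s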

Regards,
--
Xavier

Can someone please help with this, or at least give some hints?

Thank you very much
Gian
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




