Re: Replacing an mds server

Oh, it says "Coming soon" somewhere? (Thanks... and I found it now at
http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mds/ )

I wrote some instructions and tested them (it was very difficult,
piecing together incomplete docs, old mailing list threads, etc. and
tinkering), but I couldn't find where to add them in the docs, and
nobody would answer my question "how do you make a new page in the
docs that actually ends up in the index?", so I never sent a pull
request. Now that I know of an existing page that says "Coming soon",
I'll add them there.

And for you, and anyone it helps:

Here is the procedure for removing ALL mds daemons and deleting the
pools (***this is not what you want as-is***). Run killall and rm -rf
on the node running the mds you want to remove, and the other steps on
the admin node (possibly the same machine, as on my test cluster):
>     killall ceph-mds
>     ceph mds cluster_down
>
>     # this seems to actually remove the mds, unlike "ceph mds rm ...",
>     # as badly explained here:
>     # http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-January/045649.html
>     ceph mds fail 0
>
>     # if the mds is still active, this says "Error EINVAL: all MDS
>     # daemons must be inactive before removing filesystem"
>     ceph fs rm cephfs --yes-i-really-mean-it
>
>     ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
>     ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
>
>     # also remove the auth key and the mds data dir
>     rm -rf "/var/lib/ceph/mds/${cluster}-${hostname}"
>     ceph auth del mds."$hostname"
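
(Not part of the above, just a sanity check I would run afterwards;
these are plain status commands, and the pool/key names are the ones
used above:)

>     ceph fs ls                  # cephfs should no longer be listed
>     ceph osd lspools            # cephfs_data/cephfs_metadata should be gone
>     ceph auth list | grep mds   # the mds key should be gone
>     ceph -s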

To replace one mds, I didn't test the procedure, but you could probably
just add a 2nd as a failover (standby), and then do only the removal
parts from above (a sketch of adding the standby follows the snippet
below):

>     killall ceph-mds   # at this point, the failover happens
>     ceph mds fail 0    # 0 is the id (rank), seen in the dump command below
>     rm -rf "/var/lib/ceph/mds/${cluster}-${hostname}"
>     ceph auth del mds."$hostname"
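
For the "add a 2nd as a failover" part, this is roughly what I would do
(untested here, either ceph-deploy or the manual equivalent from the
docs; "newhost" and the default cluster name "ceph" are placeholders):

>     # with ceph-deploy, from the admin node:
>     ceph-deploy mds create newhost
>
>     # or manually, on the new node:
>     mkdir -p /var/lib/ceph/mds/ceph-"$hostname"
>     ceph auth get-or-create mds."$hostname" mon 'allow profile mds' \
>         osd 'allow rwx' mds 'allow' \
>         -o /var/lib/ceph/mds/ceph-"$hostname"/keyring
>     systemctl start ceph-mds@"$hostname"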

This command seems not to be required, but I'm noting it here just in case:
> ceph mds rm 0 mds."$hostname"

Check on that with:
> ceph mds dump --format json-pretty

E.g. in this output, I have one mds running as rank "0", and it is in
and up. Make sure to save the output from before you start, so you can
compare what one mds looks like, then again with the failover set up,
and then after the removal.

>     "in": [
>         0
>     ],
>     "up": {
>         "mds_0": 2504573
>     },
>     "failed": [],
>     "damaged": [],
>     "stopped": [],
>     "info": {
>         "gid_2504573": {
>             "gid": 2504573,
>             "name": "ceph2",
>             "rank": 0,
>             "incarnation": 99,
>             "state": "up:active",
>             "state_seq": 65,
>             "addr": "10.3.0.132:6818\/3463",
>             "standby_for_rank": -1,
>             "standby_for_fscid": -1,
>             "standby_for_name": "",
>             "standby_replay": false,
>             "export_targets": [],
>             "features": 576460752032874495
>         }
>     },
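
If you would rather check that than eyeball the JSON, something like
this should work, assuming jq is installed and the layout matches the
snippet above:

>     # print name, rank and state of each mds in the map
>     ceph mds dump --format json 2>/dev/null \
>         | jq -r '.info[] | "\(.name) rank=\(.rank) \(.state)"'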


On 01/24/17 20:56, Jorge Garcia wrote:
> I have been using a ceph-mds server that has low memory. I want to
> replace it with a new system that has a lot more memory. How does one
> go about replacing the ceph-mds server? I looked at the documentation,
> figuring I could remove the current metadata server and add the new
> one, but the remove metadata server section just says "Coming
> soon...". The same page also has a warning about running multiple
> metadata servers. So am I stuck?
>
> Thanks!
>
> Jorge
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 

--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@xxxxxxxxxxxxxxxxxxxx
Internet: http://www.brockmann-consult.de
--------------------------------------------



