Sent: 26 July 2016 6:05:05
To: 朱 彤
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: RE: RE: RE: how to transfer ceph cluster from the old network-and-hosts to a new one
Besides, systemctl status ceph-osd@3 and systemctl status ceph-osd@4 give the same result: Active: inactive (dead)
Best regards,
Дробышевский Владимир
Company "АйТи Город"
+7 343 2222192
ICQ# - 1921011
Hardware and software
IBM, Microsoft, Eset, Яндекс
Turnkey project delivery
IT services outsourcing
Now the service could be found, thanks. However, both ceph osd tree and ceph status show that osd.3 is still up, although ceph status also shows "degraded, stuck unclean..." I think this is because of clock skew on the second MON.
Besides, systemctl status ceph-osd@3 and systemctl status ceph-osd@4 give the same result: Active: inactive (dead)
[root@ceph-node1 ~]# systemctl stop ceph-osd@3
[root@ceph-node1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04408 root default
-2 0.01469     host ceph-node1
 0 0.00490         osd.0             up  1.00000          1.00000
 1 0.00490         osd.1             up  1.00000          1.00000
 2 0.00490         osd.2             up  1.00000          1.00000
-3 0.01469     host ceph-node3
 3 0.00490         osd.3             up        0          1.00000
 4 0.00490         osd.4             up        0          1.00000
 8 0.00490         osd.8             up        0          1.00000
-4 0.01469     host ceph-node2
 5 0.00490         osd.5             up  1.00000          1.00000
 6 0.00490         osd.6             up  1.00000          1.00000
 7 0.00490         osd.7             up  1.00000          1.00000
[root@ceph-node1 ~]# ceph status
    cluster eee6caf2-a7c6-411c-8711-a87aa4a66bf2
     health HEALTH_WARN
            clock skew detected on mon.ceph-node3
            48 pgs degraded
            112 pgs stuck unclean
            48 pgs undersized
            recovery 342/513 objects degraded (66.667%)
            Monitor clock skew detected
     monmap e2: 2 mons at {ceph-node1=192.168.57.101:6789/0,ceph-node3=192.168.57.103:6789/0}
            election epoch 44, quorum 0,1 ceph-node1,ceph-node3
     osdmap e99: 9 osds: 9 up, 6 in; 64 remapped pgs
            flags sortbitwise
      pgmap v45477: 112 pgs, 7 pools, 1636 bytes data, 171 objects
            224 MB used, 30429 MB / 30653 MB avail
            342/513 objects degraded (66.667%)
                  53 active+remapped
                  48 active+undersized+degraded
                  11 active
[root@ceph-node1 ~]# systemctl status ceph-osd@3
● ceph-osd@3.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
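If the daemon really is inactive but the monitors keep reporting osd.3 as up, a minimal sketch of the usual checks (node and OSD names are taken from the output above; chronyd/ntpd for the clock skew is an assumption about what the host runs):

systemctl status ceph-osd@3    # confirm the daemon is really stopped on ceph-node3
ceph osd down 3                # ask the monitors to mark osd.3 down; a still-running daemon would simply re-register as up
ceph osd tree                  # re-check the UP/DOWN column
systemctl restart chronyd      # on the skewed monitor host (or ntpd, whichever is in use)
ceph status                    # the clock skew warning should drop off once the monitors re-check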
From: vlad@xxxxxxxxxx <vlad@xxxxxxxxxx> on behalf of Владимир Дробышевский <v.heathen@xxxxxxxxx>
Sent: 26 July 2016 4:47:14
To: 朱 彤
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: RE: RE: how to transfer ceph cluster from the old network-and-hosts to a new one

Hi!
You should use ceph-osd@<num> as a service name, not ceph, and systemctl as a service control utility.
For example, 'systemctl stop ceph-osd@3'
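As a side note, a minimal sketch of the per-daemon systemd units that the Jewel-era packages install on CentOS 7 (the target units are an assumption about this particular install; systemctl list-unit-files 'ceph*' shows what is really there):

systemctl status ceph-osd@3      # one specific OSD daemon
systemctl stop ceph-osd@3        # stop it
systemctl start ceph-osd@3       # start it again
systemctl stop ceph-osd.target   # all OSD daemons on this host, if the target unit exists
systemctl stop ceph.target       # every Ceph daemon on this host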
Best regards,
Vladimir
Best regards,
Дробышевский Владимир
Company "АйТи Город"
+7 343 2222192
ICQ# - 1921011
Hardware and software
IBM, Microsoft, Eset, Яндекс
Turnkey project delivery
IT services outsourcing
2016-07-26 7:11 GMT+05:00 朱 彤 <besthopeall@xxxxxxxxxxx>:
@Дробышевский thanks, I have tried, but
# service ceph status
Redirecting to /bin/systemctl status ceph.service
● ceph.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
compared to another node that has /etc/init.d/ceph and CentOS 7 as well:
$ service ceph status
=== osd.6 ===
osd.6: running failed: '/usr/bin/ceph --admin-daemon /var/run/ceph/ceph-osd.6.asok version 2>/dev/null'
They have the same OS, but one has /etc/init.d/ceph and the other one doesn't. Probably I skipped some steps by mistake.
Since /etc/init.d/ceph is a script, can I just copy it over to make it work?
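Rather than copying the sysvinit script from the other node, a simpler check is to see what this node actually has installed; a small sketch (the unit path is the standard CentOS 7 location shown in the systemctl output earlier in this thread):

systemctl list-unit-files 'ceph*'                    # which Ceph unit files exist on this host
systemctl list-units 'ceph*' --all                   # which of them are loaded or active right now
rpm -qf /usr/lib/systemd/system/ceph-osd@.service    # which package owns the unit file, if it is present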
From: Дробышевский, Владимир <vlad@xxxxxxxxxx>
Sent: 25 July 2016 12:43:54
To: 朱 彤
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: RE: how to transfer ceph cluster from the old network-and-hosts to a new one

Hi!
CentOS 7 uses systemd, so you should stop an OSD with 'systemctl stop ceph-osd@<num>'.
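For reference, a rough mapping between the old sysvinit invocation and the systemd one (a sketch; OSD number 3 is just an example):

/etc/init.d/ceph stop osd.3    # pre-systemd style; this script is not installed here
systemctl stop ceph-osd@3      # systemd style on CentOS 7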
Best regards,
Vladimir
Best regards,
Дробышевский Владимир
Company "АйТи Город"
+7 343 2222192
Hardware and software
IBM, Microsoft, Eset
Turnkey project delivery
IT services outsourcing
2016-07-25 13:44 GMT+05:00 朱 彤 <besthopeall@xxxxxxxxxxx>:
@Henrik Korkuc thanks for the tip, I am working on it. In order to stop an OSD, I need to run /etc/init.d/ceph stop osd.num, but I just noticed that /etc/init.d/ceph is missing: no such file or directory. I used ceph-deploy to install the cluster on CentOS 7. Any idea?
Thanks!
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Henrik Korkuc <lists@xxxxxxxxx>
Sent: 25 July 2016 8:03:34
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: how to transfer ceph cluster from the old network-and-hosts to a new one

On 16-07-25 10:55, 朱 彤 wrote:
Hey,
Hi all,
I'm looking for a method to transfer a Ceph cluster.
Now the cluster is located in network1 that has hosts A, B, C...
And the target is to transfer it to network2 that has hosts a,b,c...
What I can think of is adding hosts a, b, c into the current cluster as new OSDs and MONs, and then, after the data has been rebalanced, taking down the OSDs and MONs on hosts A, B, C.
Then the question would be how to know when the old OSDs can safely be taken down (see the sketch below).
This method involves a lot of redundant operations. Besides creating OSDs and MONs in the new environment, should I also create the same pools and PGs that the old cluster has?
Is there a more direct way to shift the cluster from the old network and hosts to a new one?
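A rough sketch of the usual drain-and-remove sequence for one old OSD (ID 3 is a placeholder; the key point is waiting for the data to be fully re-replicated before stopping anything):

ceph osd out 3                 # stop placing data on osd.3; backfill/recovery begins
ceph -w                        # watch until all PGs are active+clean again
systemctl stop ceph-osd@3      # only then stop the daemon
ceph osd crush remove osd.3    # remove it from the CRUSH map
ceph auth del osd.3            # delete its authentication key
ceph osd rm 3                  # remove it from the OSD map

The same applies per OSD (or per host), and old MONs should only be removed once the new ones have joined the quorum.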
Please refer to the recent post named "change of dns names and IP addresses of cluster members" in this mailing list. If both networks are interconnected, the migration should be quite easy.
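For reference, the documented procedure for changing monitor addresses directly (rather than adding new MONs and removing old ones) looks roughly like this; the thread mentioned above may describe a different route, and node1 with 10.0.0.101 are placeholders:

ceph mon getmap -o /tmp/monmap                          # export the current monitor map
monmaptool --print /tmp/monmap                          # inspect it
monmaptool --rm node1 /tmp/monmap                       # drop the monitor entry with the old address
monmaptool --add node1 10.0.0.101:6789 /tmp/monmap      # re-add it with the new address
# stop all monitors, then on each monitor host inject the edited map:
ceph-mon -i node1 --inject-monmap /tmp/monmap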
Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com