add mon and move mon

Dear friends:

 

       Hello, I have a small problem when I use Ceph. My cluster has three monitors, and I want to remove one of them.

[root@node01 ~]# ceph -s
    cluster b0d8bd0d-6269-4ce7-a10b-9adc7ee2c4c8
     health HEALTH_WARN
            too many PGs per OSD (682 > max 300)
     monmap e23: 3 mons at {node01=172.168.2.185:6789/0,node02=172.168.2.186:6789/0,node03=172.168.2.187:6789/0}
            election epoch 472, quorum 0,1,2 node01,node02,node03
     osdmap e7084: 18 osds: 18 up, 18 in
      pgmap v1051011: 4448 pgs, 15 pools, 7915 MB data, 12834 objects
            27537 MB used, 23298 GB / 23325 GB avail
                4448 active+clean

 

 

So I did the following:

#ceph-deploy mon destroy node03
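
(For reference, one way to verify the removal, assuming the standard ceph CLI on an admin node:)

[root@node01 ~]# ceph mon stat      # should now report 2 mons, quorum node01,node02
[root@node01 ~]# ceph mon dump      # node03 should no longer appear in the monmap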

 

Then I added it to the cluster again.

 

#ceph-deploy mon add node03
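
(For reference, a quick way to check that node03 rejoined the quorum, again assuming the standard CLI:)

[root@node01 ~]# ceph quorum_status --format json-pretty   # node03 should be listed in the quorum again
[root@node01 ~]# ceph mon stat                             # the monmap epoch should have increased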

 

node03 was added to the cluster, but after a while the monitor went down.

When I look at /var/log/messages, I find this:

 

Apr 19 11:12:01 node01 systemd: Starting Session 14091 of user root.
Apr 19 11:12:01 node01 systemd: Started Session 14091 of user root.
Apr 19 11:12:39 node01 bash: 2016-04-19 11:12:39.533817 7f6e51ec2700 -1 mon.node01@0(leader) e23 *** Got Signal Terminated ***
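
(To see why the monitor receives the TERM signal, it may help to look at the monitor's own log and, if systemd units are used on this install, at the unit status — the ceph-mon@<hostname> unit name below is an assumption based on the usual systemd packaging:)

[root@node03 ~]# tail -n 100 /var/log/ceph/ceph-mon.node03.log   # default monitor log location
[root@node03 ~]# systemctl status ceph-mon@node03                # if the systemd unit is in use
[root@node03 ~]# journalctl -u ceph-mon@node03 -n 100            # recent journal entries for that unit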

When I start the monitor again, it goes down after a while.

But I have enough disk space:

[root@node03 ~]# df -TH
Filesystem            Type      Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root xfs        11G  4.7G  6.1G  44% /
devtmpfs              devtmpfs   26G     0   26G   0% /dev
tmpfs                 tmpfs      26G   82k   26G   1% /dev/shm
tmpfs                 tmpfs      26G  147M   26G   1% /run
tmpfs                 tmpfs      26G     0   26G   0% /sys/fs/cgroup
/dev/mapper/rhel-usr  xfs        11G  4.1G  6.7G  38% /usr
/dev/mapper/rhel-tmp  xfs        11G   34M   11G   1% /tmp
/dev/mapper/rhel-home xfs        11G   34M   11G   1% /home
/dev/mapper/rhel-var  xfs        11G  1.6G  9.2G  15% /var
/dev/sde1             xfs       2.0T  152M  2.0T   1% /var/lib/ceph/osd/ceph-15
/dev/sdg1             xfs       2.0T  3.8G  2.0T   1% /var/lib/ceph/osd/ceph-17
/dev/sdd1             xfs       2.0T  165M  2.0T   1% /var/lib/ceph/osd/ceph-14
/dev/sda1             xfs       521M  131M  391M  26% /boot
/dev/sdb1             xfs       219G  989M  218G   1% /var/lib/ceph/osd/ceph-4
/dev/sdf1             xfs       2.0T  4.6G  2.0T   1% /var/lib/ceph/osd/ceph-16
/dev/sdc1             xfs       219G  129M  219G   1% /var/lib/ceph/osd/ceph-5
You have new mail in /var/spool/mail/root
[root@node03 ~]#
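
(Since disk space looks fine, a further sketch of what one could check on node03 — the paths assume the default cluster name "ceph":)

[root@node03 ~]# ls /var/lib/ceph/mon/ceph-node03/      # the re-added monitor's data store
[root@node03 ~]# ceph daemon mon.node03 mon_status      # via the admin socket, while the mon is still up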

 

What is the problem? Is my operation wrong?

 

Looking forward to your reply.

 

 

                                                                                                  --Dingxf48

 

 

Sent from the Windows 10 Mail app

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
