1 mons down, ceph-create-keys

Hello,
Could you please help me with the following?

ceph status
    cluster 4da1f6d8-ca10-4bfa-bff7-c3c1cdb3f888
     health HEALTH_WARN 229 pgs peering; 102 pgs stuck inactive; 236 pgs stuck unclean; 1 mons down, quorum 0,1 st1,st2
     monmap e3: 3 mons at {st1=109.233.57.226:6789/0,st2=91.224.140.229:6789/0,st3=176.9.250.166:6789/0}, election epoch 72432, quorum 0,1 st1,st2
     osdmap e714: 3 osds: 3 up, 3 in
      pgmap v1824: 292 pgs, 4 pools, 135 bytes data, 2 objects
            137 MB used, 284 GB / 284 GB avail
                   7 active
                  56 active+clean
                 188 peering
                  41 remapped+peering
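
If more detail would help, I can also run the following (assuming the default cluster name and config path, so no extra --cluster or -c flags) to list the stuck PGs and the monitor states:

ceph health detail
ceph mon stat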

I tried restarting the st3 monitor:
 service ceph -a restart mon.st3
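
As far as I understand, the restart only brings the daemon back up; to see whether mon.st3 actually rejoins the quorum I can query its admin socket (assuming the default socket path under /var/run/ceph):

ceph --admin-daemon /var/run/ceph/ceph-mon.st3.asok mon_status

The "state" field should move from probing/electing/synchronizing to peon or leader once it has joined.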

ps aux | grep ceph
root      9642  1.7 19.8 785988 202260 ?       S<sl 12:16   0:11 /usr/bin/ceph-osd -i 2 --pid-file /var/run/ceph/osd.2.pid -c /etc/ceph/ceph.conf
root     21375  5.0  3.5 212996 35852 pts/0    Sl   12:27   0:00 /usr/bin/ceph-mon -i st3 --pid-file /var/run/ceph/mon.st3.pid -c /etc/ceph/ceph.conf
root     21393  0.5  0.5  51308  6060 pts/0    S    12:27   0:00 python /usr/sbin/ceph-create-keys -i st3

The ceph-create-keys process is stuck and never finishes.
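
As far as I understand, ceph-create-keys just polls the local monitor and waits for it to reach quorum before creating the client.admin and bootstrap keyrings, so it hanging forever suggests mon.st3 never joins the quorum. Would checking the monitor log and the network path to the other monitors be the right next step? (Assuming the default log location; the IPs and port 6789 are the ones from the monmap above.)

tail -f /var/log/ceph/ceph-mon.st3.log
telnet 109.233.57.226 6789
telnet 91.224.140.229 6789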



