Not able to achieve active+clean state

Agree with Iban: adding one more OSD node may help.
I hit the same problem when I created a cluster following the quick start guide. Two OSD nodes gave me HEALTH_WARN, and adding one more OSD node made everything okay.
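
For reference, something like the following is roughly what the quick start describes for bringing up another OSD host. This is only a sketch: "node4" and "sdb" are placeholder names for a hypothetical third host and its spare disk, so adjust them to your environment. Run from the admin node:

ceph-deploy install node4
ceph-deploy disk list node4        # confirm the spare disk is visible
ceph-deploy disk zap node4:sdb     # wipes the disk, any data on it is lost
ceph-deploy osd create node4:sdb   # prepare + activate in one step
ceph osd tree                      # the new OSD should show up as "up"
ceph health                        # should move towards active+clean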



From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Iban Cabrillo
Sent: Friday, July 18, 2014 10:43 PM
To: Pratik Rupala
Cc: ceph-users at lists.ceph.com
Subject: Re: Not able to achieve active+clean state

Hi Pratik,
  I am not an expert, but I think you need one more OSD server: the default pools (rbd, metadata, data) have 3 replicas by default.
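If adding a third OSD server is not possible right away, I think you can also check the replica count and lower it to 2 for the default pools. This is just a sketch, not tested on your cluster; the pool names are the firefly defaults:

ceph osd dump | grep 'replicated size'   # show the current replica size of each pool
ceph osd pool set data size 2
ceph osd pool set metadata size 2
ceph osd pool set rbd size 2
ceph health                              # PGs should peer and move towards active+clean

You can also put "osd pool default size = 2" in the [global] section of ceph.conf before creating the cluster, so new pools default to 2 replicas.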
Regards, I
On 18/07/2014 14:19, "Pratik Rupala" <pratik.rupala at calsoftinc.com> wrote:
Hi,

I am deploying the firefly release on CentOS 6.4, following the quick installation instructions available at ceph.com.
The kernel version in CentOS 6.4 is 2.6.32-358.

I am using virtual machines for all the nodes. The setup consists of one admin node, one monitor node, and two OSD nodes.
On each OSD node I have added four OSDs backed by 10 GB SCSI disks, instead of just creating a directory on the OSD nodes as shown in the quick installation section.

At the end, when I run the ceph health command, the guide says the cluster should reach the active+clean state, but mine does not, as shown below:

[ceph at node1 ~]$ ceph health
HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean
[ceph at node1 ~]$ ceph status
    cluster 08c77eb5-4fa9-4d4a-938c-af812137cb2c
     health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean
     monmap e1: 1 mons at {node1=172.17.35.17:6789/0}, election epoch 1, quorum 0 node1
     osdmap e36: 8 osds: 8 up, 8 in
      pgmap v95: 192 pgs, 3 pools, 0 bytes data, 0 objects
            271 MB used, 40600 MB / 40871 MB avail
                 192 creating+incomplete
[ceph at node1 ~]$


[ceph at node1 ~]$ ceph osd tree
# id    weight  type name       up/down reweight
-1      0       root default
-2      0               host node2
0       0                       osd.0   up      1
1       0                       osd.1   up      1
2       0                       osd.2   up      1
3       0                       osd.3   up      1
-3      0               host node3
4       0                       osd.4   up      1
5       0                       osd.5   up      1
6       0                       osd.6   up      1
7       0                       osd.7   up      1
[ceph at node1 ~]$


Please let me know if I am missing anything. Is there something more I need to do to bring my Ceph cluster to HEALTH_OK state?

Regards,
Pratik Rupala
_______________________________________________
ceph-users mailing list
ceph-users at lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


