Not able to achieve active+clean state

You can change the pools' configuration using, for example, one of these two
options:

 1. define a different crush_ruleset for the pools to allow replicas to be
placed on the same host (so 3 replicas can be accommodated on two hosts).

 2. set the number of replicas to two

Option 2 is very easy:

$ ceph osd pool set <pool_name> size 2
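
With the firefly defaults you will usually have three pools (data, metadata,
rbd), so the command has to be repeated for each of them. Assuming those
default pool names, something like:

$ ceph osd pool set data size 2
$ ceph osd pool set metadata size 2
$ ceph osd pool set rbd size 2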

For option 1, please have a look here:
http://blog.zhaw.ch/icclab/deploy-ceph-troubleshooting-part-23/ (case 2 of
the "Check that replication requirements can be met" paragraph).


2014-07-18 16:55 GMT+02:00 Joe Hewitt <joe.z.hewitt at gmail.com>:

> Agree with Iban, adding one more OSD may help.
> I met the same problem when I created a cluster following the quick start
> guide. Two OSD nodes gave me HEALTH_WARN, and adding one more OSD made
> everything okay.
>
>
>
> From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf Of
> Iban Cabrillo
> Sent: Friday, July 18, 2014 10:43 PM
> To: Pratik Rupala
> Cc: ceph-users at lists.ceph.com
> Subject: Re: [ceph-users] Not able to achieve active+clean state
>
> Hi Pratik,
>   I am not an expert, but I think you need one more OSD server; the
> default pools (rbd, metadata, data) have 3 replicas by default.
> Regards, I
> On 18/07/2014 14:19, "Pratik Rupala" <pratik.rupala at calsoftinc.com>
> wrote:
> Hi,
>
> I am deploying the Firefly release on CentOS 6.4, following the quick
> installation instructions available at ceph.com.
> The kernel version in CentOS 6.4 is 2.6.32-358.
>
> I am using virtual machines for all the nodes. The setup consists of one
> admin node, one monitor node, and two OSD nodes.
> On each OSD node I have added four OSDs created from four 10 GB SCSI
> disks, instead of just creating a directory on the OSD nodes as shown in
> that quick installation section.
>
> At the end, when I run the ceph health command, the documentation says the
> cluster should reach the active+clean state. But my cluster is not able to
> achieve it, as shown below:
>
> [ceph at node1 ~]$ ceph health
> HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck
> unclean
> [ceph at node1 ~]$ ceph status
>     cluster 08c77eb5-4fa9-4d4a-938c-af812137cb2c
>      health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192
> pgs stuck unclean
>      monmap e1: 1 mons at {node1=172.17.35.17:6789/0}, election epoch 1,
> quorum 0 node1
>      osdmap e36: 8 osds: 8 up, 8 in
>       pgmap v95: 192 pgs, 3 pools, 0 bytes data, 0 objects
>             271 MB used, 40600 MB / 40871 MB avail
>                  192 creating+incomplete
> [ceph at node1 ~]$
>
>
> [ceph at node1 ~]$ ceph osd tree
> # id    weight  type name       up/down reweight
> -1      0       root default
> -2      0               host node2
> 0       0                       osd.0   up      1
> 1       0                       osd.1   up      1
> 2       0                       osd.2   up      1
> 3       0                       osd.3   up      1
> -3      0               host node3
> 4       0                       osd.4   up      1
> 5       0                       osd.5   up      1
> 6       0                       osd.6   up      1
> 7       0                       osd.7   up      1
> [ceph at node1 ~]$
>
>
> Please let me know if I am missing anything. What do I still need to do to
> bring my Ceph cluster to the HEALTH_OK state?
>
> Regards,
> Pratik Rupala
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Vincenzo Pii
Researcher, InIT Cloud Computing Lab
Zurich University of Applied Sciences (ZHAW)
http://www.cloudcomp.ch/

