Re: new cluster does not reach active+clean

Hi Jogi!

This is happening because of your single-OSD-node setup. By default
Ceph distributes PGs by host, and with only one host available CRUSH
cannot place the second replica, so the PGs stay degraded.
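
For context, the default CRUSH rule on a freshly created cluster looks
roughly like this (exact names can vary between releases); the
"step chooseleaf firstn 0 type host" line is what forces each replica
onto a separate host:

rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

With the default pool size of 2 and only one host in the CRUSH tree,
only one replica can be placed, which matches the "last acting [0]"
you are seeing below.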

You can add this to your ceph.conf to distribute by device rather than by node:

osd crush chooseleaf type = 0
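
In ceph.conf that normally goes in the [global] section; a value of 0
means CRUSH chooses leaves of type "osd" (individual devices) instead
of type "host":

[global]
osd crush chooseleaf type = 0

As far as I know this setting is only read when the monitors build the
initial CRUSH map, so by itself it only helps clusters created after
you add it.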

This information is also available in the docs :)

http://ceph.com/docs/next/start/quick-ceph-deploy/#create-a-cluster
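
Since your cluster already exists, you can also change the live CRUSH
map directly. Roughly (the file names here are just placeholders, so
double-check the edited map before injecting it):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: change
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin

Once the new map is in, the PGs should peer and go active+clean after
a short while.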

On Thu, Oct 3, 2013 at 4:16 AM, Jogi Hofmüller <jogi@xxxxxx> wrote:
> Dear all,
>
> Hope I am not on everyone's nerves by now ;)
>
> Just started over and created a new cluster:
>
>   one monitor (ceph-mon0)
>   one osd-server (ceph-rd0)
>
> After activating the two OSDs on ceph-rd0 the cluster reaches the state
> active+degraded and never becomes healthy.  Unfortunately this
> particular state is not documented here [1].
>
> Some output:
>
> ceph@ceph-admin:~/cl0$ ceph -w
>   cluster 6f1dfb78-e917-4286-a8f0-2e389d295e43
>    health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
>    monmap e1: 1 mons at {ceph-mon0=192.168.122.56:6789/0}, election
> epoch 2, quorum 0 ceph-mon0
>    osdmap e8: 2 osds: 2 up, 2 in
>     pgmap v15: 192 pgs: 192 active+degraded; 0 bytes data, 69924 KB
> used, 6053 MB / 6121 MB avail
>    mdsmap e1: 0/0/1 up
>
>
> 2013-10-03 13:09:59.997777 osd.0 [INF] pg has no unfound objects
>
> ceph@ceph-admin:~/cl0$ ceph health detail
> HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
> pg 0.3f is stuck unclean since forever, current state active+degraded,
> last acting [0]
> pg 1.3e is stuck unclean since forever, current state active+degraded,
> last acting [0]
> pg 2.3d is stuck unclean since forever, current state active+degraded,
> last acting [0]
> (cut some lines)
> pg 1.0 is active+degraded, acting [0]
> pg 0.1 is active+degraded, acting [0]
> pg 2.2 is active+degraded, acting [0]
> pg 1.1 is active+degraded, acting [0]
> pg 0.0 is active+degraded, acting [0]
>
> Any idea what went wrong here?
>
> [1]  http://eu.ceph.com/docs/wip-3060/ops/manage/failures/osd/
>
> Regards!
> --
> j.hofmüller
>
> Optimism doesn't alter the laws of physics.         - Subcommander T'Pol
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




