Re: All pgs with -> up [0] acting [0], new cluster installation

On 13-07-15 13:12, alberto ayllon wrote:
> Maybe this can help to find the origin of the problem.
> 
> If I run ceph pg dump, at the end of the response I get:
> 

What does 'ceph osd tree' tell you?

It seems there is something wrong with your CRUSH map.
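
To take a closer look at the map itself, something along these lines works
(crushmap.bin and crushmap.txt are just placeholder file names):

    # ceph osd getcrushmap -o crushmap.bin
    # crushtool -d crushmap.bin -o crushmap.txt

Every OSD should appear under its host with a non-zero weight, and the
decompiled map shows which rule the rbd pool is using.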

Wido

> 
> osdstat  kbused   kbavail   kb        hb in              hb out
> 0        36688    5194908   5231596   [1,2,3,4,5,6,7,8]  []
> 1        34004    5197592   5231596   []                 []
> 2        34004    5197592   5231596   [1]                []
> 3        34004    5197592   5231596   [0,1,2,4,5,6,7,8]  []
> 4        34004    5197592   5231596   [1,2]              []
> 5        34004    5197592   5231596   [1,2,4]            []
> 6        34004    5197592   5231596   [0,1,2,3,4,5,7,8]  []
> 7        34004    5197592   5231596   [1,2,4,5]          []
> 8        34004    5197592   5231596   [1,2,4,5,7]        []
> sum      308720   46775644  47084364
> 
> 
> Please, can someone help me?
> 
> 
> 
> 2015-07-13 11:45 GMT+02:00 alberto ayllon <albertoayllonces@xxxxxxxxx>:
> 
>     Hello everybody, and thanks for your help.
> 
>     I'm a newbie with Ceph, and I'm trying to install a Ceph cluster for
>     testing purposes.
> 
>     I have just installed a Ceph cluster on three VMs (Ubuntu 14.04);
>     each one runs one mon daemon and three OSDs, and each server has 3 disks.
>     The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and
>     "ceph osd pool get rbd size" returns 2.
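> 
>     For reference, those values were checked with commands like the
>     following (output shown just for illustration):
> 
>         # ceph osd pool get rbd pg_num
>         pg_num: 280
>         # ceph osd pool get rbd size
>         size: 2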
> 
>     I did the cluster installation with ceph-deploy; the ceph version is
>     0.94.2.
> 
>     I think the cluster's OSDs are having peering problems, because if I run
>     ceph status, it returns:
> 
>     # ceph status
>         cluster d54a2216-b522-4744-a7cc-a2106e1281b6
>          health HEALTH_WARN
>                 280 pgs degraded
>                 280 pgs stuck degraded
>                 280 pgs stuck unclean
>                 280 pgs stuck undersized
>                 280 pgs undersized
>          monmap e3: 3 mons at
>     {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
>                 election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
>          osdmap e46: 9 osds: 9 up, 9 in
>           pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
>                 301 MB used, 45679 MB / 45980 MB avail
>                      280 active+undersized+degraded
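> 
>     More detail on the stuck PGs could be pulled with something like (just
>     a sketch):
> 
>         # ceph health detail
>         # ceph pg dump_stuck unclean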
> 
>     And for all PGs, the command "ceph pg map X.yy" returns something like:
> 
>     osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]
> 
>     As far as I know, the "Acting Set" and the "Up Set" must have the same
>     value, but since both are just [0], there are no OSDs assigned to store
>     the PGs' replicas, and I think this is why all PGs are in the
>     "active+undersized+degraded" state.
> 
>     Does anyone have an idea of what I have to do so that the "Acting Set"
>     and "Up Set" reach correct values?
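> 
>     For comparison, with size = 2 I would expect each PG to map to two
>     different OSDs, something like (the OSD ids here are just an example):
> 
>         # ceph pg map 0.d7
>         osdmap e46 pg 0.d7 (0.d7) -> up [3,7] acting [3,7]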
> 
> 
>     Thanks a lot!
> 
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


