Re: All pgs with -> up [0] acting [0], new cluster installation

Maybe this helps to track down the origin of the problem.

If I run "ceph pg dump", at the end of the output I get:


osdstat  kbused  kbavail   kb        hb in              hb out
0        36688   5194908   5231596   [1,2,3,4,5,6,7,8]  []
1        34004   5197592   5231596   []                 []
2        34004   5197592   5231596   [1]                []
3        34004   5197592   5231596   [0,1,2,4,5,6,7,8]  []
4        34004   5197592   5231596   [1,2]              []
5        34004   5197592   5231596   [1,2,4]            []
6        34004   5197592   5231596   [0,1,2,3,4,5,7,8]  []
7        34004   5197592   5231596   [1,2,4,5]          []
8        34004   5197592   5231596   [1,2,4,5,7]        []
sum      308720  46775644  47084364
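One thing that may be worth checking, given that each OSD reports only about 5 GB of capacity: as far as I understand, ceph-disk (which ceph-deploy uses) derives the initial CRUSH weight from the device size in TiB, rounded to two decimals, so very small test disks can end up with a weight of 0.00, and CRUSH may then fail to map a second OSD to any PG. A rough sketch of that arithmetic (the rounding rule is my assumption):

```shell
# "kb" column reported by "ceph pg dump" for each OSD (~5 GB per disk)
kb=5231596
# Assumed default: CRUSH weight = device size in TiB, rounded to two decimals
weight=$(awk -v kb="$kb" 'BEGIN { printf "%.2f", kb / (1024 * 1024 * 1024) }')
echo "$weight"   # prints 0.00 -- a weight that effectively excludes the OSD from placement
```

If "ceph osd tree" shows a weight of 0 for every OSD, reweighting them (e.g. "ceph osd crush reweight osd.0 1" for each OSD) might let the PGs peer; whether that is the right fix here is an assumption on my part.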


Can someone please help me?



2015-07-13 11:45 GMT+02:00 alberto ayllon <albertoayllonces@xxxxxxxxx>:
Hello everybody, and thanks for your help.

I'm a newbie with Ceph, and I'm trying to install a Ceph cluster for testing purposes.

I have just installed a Ceph cluster on three VMs (Ubuntu 14.04); each one runs one mon daemon and three OSDs, one per disk.
The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and "osd pool get rbd size" = 2.

I did the cluster installation with ceph-deploy; the Ceph version is 0.94.2.

I think the cluster's OSDs are having peering problems, because if I run ceph status it returns:

# ceph status
    cluster d54a2216-b522-4744-a7cc-a2106e1281b6
     health HEALTH_WARN
            280 pgs degraded
            280 pgs stuck degraded
            280 pgs stuck unclean
            280 pgs stuck undersized
            280 pgs undersized
            election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e46: 9 osds: 9 up, 9 in
      pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
            301 MB used, 45679 MB / 45980 MB avail
                 280 active+undersized+degraded

And for every PG, the command "ceph pg map X.yy" returns something like:

osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]

As far as I know, the "acting set" and the "up set" should have the same membership, but here both contain only osd.0, so no second OSD is assigned to
store the PGs' replicas, and I think this is why all PGs are in the "active+undersized+degraded" state.
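The undersized condition described above is just "fewer OSDs in the acting set than the pool's size"; a minimal sketch of that check, with the values taken from the output above:

```shell
pool_size=2                                   # from "osd pool get rbd size"
acting="0"                                    # acting set from "ceph pg map 0.d7", i.e. [0]
acting_count=$(echo "$acting" | wc -w | tr -d ' ')
if [ "$acting_count" -lt "$pool_size" ]; then
    echo "undersized"                         # replica count is below the pool size
fi
```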

Does anyone have an idea of what I have to do so that the "acting set" and "up set" reach correct values?


Thanks a lot!


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com