Maybe this can help to find the origin of the problem.
If I run ceph pg dump, at the end of the output I get:
osdstat  kbused   kbavail   kb        hb in              hb out
0        36688    5194908   5231596   [1,2,3,4,5,6,7,8]  []
1        34004    5197592   5231596   []                 []
2        34004    5197592   5231596   [1]                []
3        34004    5197592   5231596   [0,1,2,4,5,6,7,8]  []
4        34004    5197592   5231596   [1,2]              []
5        34004    5197592   5231596   [1,2,4]            []
6        34004    5197592   5231596   [0,1,2,3,4,5,7,8]  []
7        34004    5197592   5231596   [1,2,4,5]          []
8        34004    5197592   5231596   [1,2,4,5,7]        []
 sum     308720   46775644  47084364
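What strikes me in that output is that the "hb in" lists are very uneven: osd.1 reports no heartbeat peers at all, and osd.2, 4, 5, 7 and 8 only see a few. In case it is useful, these are the commands I was going to run next to look at the OSD/host layout and the OSD addresses (just my own sketch of checks, I am not sure they are the right ones):

# how CRUSH sees the cluster: are all 9 OSDs placed under the 3 hosts?
ceph osd tree

# the addresses of every OSD, to check they can all reach each other
ceph osd dump | grep "^osd\."

# per-PG detail of the stuck/degraded state
ceph health detail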
Can someone please help me?
2015-07-13 11:45 GMT+02:00 alberto ayllon <albertoayllonces@xxxxxxxxx>:
Hello everybody, and thanks for your help.

I'm a newbie with Ceph, and I'm trying to install a Ceph cluster for test purposes. I have just installed a cluster on three VMs (Ubuntu 14.04); each one runs one mon daemon and three OSDs, and each server has 3 disks. The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and "ceph osd pool get rbd size" returns 2. I made the installation with ceph-deploy; the Ceph version is 0.94.2.

I think the cluster's OSDs are having peering problems, because ceph status returns:

# ceph status
    cluster d54a2216-b522-4744-a7cc-a2106e1281b6
     health HEALTH_WARN
            280 pgs degraded
            280 pgs stuck degraded
            280 pgs stuck unclean
            280 pgs stuck undersized
            280 pgs undersized
     monmap e3: 3 mons at {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
            election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e46: 9 osds: 9 up, 9 in
      pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
            301 MB used, 45679 MB / 45980 MB avail
                 280 active+undersized+degraded

And for all PGs, the command "ceph pg map X.yy" returns something like:

osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]

As far as I know, the "Acting Set" and the "Up Set" should have the same value, but since both contain only osd.0, there are no OSDs assigned to store the PGs' replicas, and I think this is why all PGs are in the "active+undersized+degraded" state.

Has anyone any idea of what I have to do so that the "Acting Set" and "Up Set" reach correct values?

Thanks a lot!
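PS: Apart from "ceph osd pool get rbd size" (which returns 2), these are the other commands I was planning to run to look at the pool and CRUSH rule definitions, in case someone wants to see their output (just a sketch, I am not sure they are the right checks):

# replica count of the rbd pool (this is where I get size = 2)
ceph osd pool get rbd size

# pool definitions, including which crush ruleset each pool uses
ceph osd dump | grep "^pool"

# the CRUSH rules, to see at which level (host/osd) replicas are separated
ceph osd crush rule dump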
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com