Hello everybody, and thanks in advance for your help.
I'm a newbie with Ceph, and I'm trying to install a Ceph cluster for testing purposes.
I have just installed a Ceph cluster on three VMs (Ubuntu 14.04); each one runs one mon daemon and three OSDs, and each server has three disks.
The cluster has only one pool (rbd), with pg_num and pgp_num = 280, and "ceph osd pool get rbd size" returns 2.
I installed the cluster with ceph-deploy; the Ceph version is 0.94.2.
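For reference, these are roughly the commands I used to configure the pool described above (reconstructed from memory, so the exact invocations may differ slightly):

# ceph osd pool set rbd pg_num 280
# ceph osd pool set rbd pgp_num 280
# ceph osd pool set rbd size 2
# ceph osd pool get rbd size
size: 2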
I think the cluster's OSDs are having peering problems, because if I run "ceph status" it returns:
# ceph status
    cluster d54a2216-b522-4744-a7cc-a2106e1281b6
     health HEALTH_WARN
            280 pgs degraded
            280 pgs stuck degraded
            280 pgs stuck unclean
            280 pgs stuck undersized
            280 pgs undersized
     monmap e3: 3 mons at {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
            election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e46: 9 osds: 9 up, 9 in
      pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
            301 MB used, 45679 MB / 45980 MB avail
                 280 active+undersized+degraded
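If more detail is useful, I can also post the output of the following commands (not included here to keep the message short):

# ceph health detail
# ceph pg dump_stuck unclean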
And for all PGs, the command "ceph pg map X.yy" returns something like:
osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]
As far as I know, the acting set and the up set should list the OSDs assigned to each PG, but since both contain only osd.0, there is no second OSD assigned to store each PG's replica, and I think this is why all PGs are in the "active+undersized+degraded" state.
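In case the problem comes from the CRUSH map rather than from the OSDs themselves, I can also send the output of these commands if it helps (not pasted here):

# ceph osd tree
# ceph osd crush rule dump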
Does anyone have any idea what I should do so that the acting set and up set reach the correct values?
Thanks a lot!