Hi Wido.
Thanks again.
I will rebuild the cluster with bigger disks.
Again, thanks for your help.
2015-07-13 14:15 GMT+02:00 Wido den Hollander <wido@xxxxxxxx>:
On 13-07-15 14:07, alberto ayllon wrote:
> On 13-07-15 13:12, alberto ayllon wrote:
>> Maybe this can help to find the origin of the problem.
>>
>> If I run ceph pg dump, at the end of the response I get:
>>
>
> What does 'ceph osd tree' tell you?
>
> It seems there is something wrong with your CRUSHMap.
>
> Wido
>
>
> Thanks for your answer Wido.
>
> Here is the output of ceph osd tree:
>
> # ceph osd tree
> ID WEIGHT TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1      0 root default
> -2      0     host ceph01
>  0      0         osd.0         up  1.00000          1.00000
>  3      0         osd.3         up  1.00000          1.00000
>  6      0         osd.6         up  1.00000          1.00000
> -3      0     host ceph02
>  1      0         osd.1         up  1.00000          1.00000
>  4      0         osd.4         up  1.00000          1.00000
>  7      0         osd.7         up  1.00000          1.00000
> -4      0     host ceph03
>  2      0         osd.2         up  1.00000          1.00000
>  5      0         osd.5         up  1.00000          1.00000
>  8      0         osd.8         up  1.00000          1.00000
>
>
The weights of all the OSDs are zero (0). How big are the disks? I
think they are very tiny, e.g. <10GB? Ceph derives the initial CRUSH
weight from the disk size in TiB, so disks that small round down to a
weight of 0.
You probably want somewhat bigger disks to test with. Or set the weight
of each OSD manually:
$ ceph osd crush reweight osd.X 1
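To apply that to all nine OSDs at once, a minimal sketch (assuming bash
and the OSD ids 0 through 8 from your tree):

$ for i in $(seq 0 8); do ceph osd crush reweight osd.$i 1; done

Afterwards 'ceph osd tree' should show non-zero weights and the PGs
should be able to peer.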
Wido
>>
>> osdstat  kbused   kbavail   kb        hb in              hb out
>> 0        36688    5194908   5231596   [1,2,3,4,5,6,7,8]  []
>> 1        34004    5197592   5231596   []                 []
>> 2        34004    5197592   5231596   [1]                []
>> 3        34004    5197592   5231596   [0,1,2,4,5,6,7,8]  []
>> 4        34004    5197592   5231596   [1,2]              []
>> 5        34004    5197592   5231596   [1,2,4]            []
>> 6        34004    5197592   5231596   [0,1,2,3,4,5,7,8]  []
>> 7        34004    5197592   5231596   [1,2,4,5]          []
>> 8        34004    5197592   5231596   [1,2,4,5,7]        []
>> sum      308720   46775644  47084364
>>
>>
>> Please, can someone help me?
>>
>>
>>
>> 2015-07-13 11:45 GMT+02:00 alberto ayllon <albertoayllonces at gmail.com>:
>> Hello everybody, and thanks for your help.
>>
>> I'm a newbie with Ceph, and I'm trying to install a Ceph cluster for
>> testing purposes.
>>
>> I have just installed a Ceph cluster on three VMs (Ubuntu 14.04);
>> each one runs one mon daemon and three OSDs, so each server has 3
>> disks.
>> The cluster has only one pool (rbd) with pg_num and pgp_num = 280, and
>> "osd pool get rbd size" = 2.
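>> For completeness, a quick way to double-check those settings (pool
>> name rbd, as above):
>>
>> $ ceph osd pool get rbd size      # expected: 2
>> $ ceph osd pool get rbd pg_num    # expected: 280
>> $ ceph osd pool get rbd pgp_num   # expected: 280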
>>
>> I installed the cluster with ceph-deploy; the Ceph version is
>> "0.94.2".
>>
>> I think the cluster's OSDs are having peering problems, because if I
>> run ceph status, it returns:
>>
>> # ceph status
>> cluster d54a2216-b522-4744-a7cc-a2106e1281b6
>> health HEALTH_WARN
>> 280 pgs degraded
>> 280 pgs stuck degraded
>> 280 pgs stuck unclean
>> 280 pgs stuck undersized
>> 280 pgs undersized
>> monmap e3: 3 mons at
>> {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
>> election epoch 38, quorum 0,1,2 ceph01,ceph02,ceph03
>> osdmap e46: 9 osds: 9 up, 9 in
>> pgmap v129: 280 pgs, 1 pools, 0 bytes data, 0 objects
>> 301 MB used, 45679 MB / 45980 MB avail
>> 280 active+undersized+degraded
>>
>> And for all pgs, the command "ceph pg map X.yy" returns something
>> like:
>>
>> osdmap e46 pg 0.d7 (0.d7) -> up [0] acting [0]
>>
>> As far as I know, the "Acting Set" and the "Up Set" must contain the
>> same OSDs, but as both are just [0], there is no second OSD defined to
>> store the PGs' replicas, and I think this is why all PGs are in the
>> "active+undersized+degraded" state.
>>
>> Does anyone have any idea what I have to do so that the "Acting Set"
>> and "Up Set" reach correct values?
>>
>>
>> Thanks a lot!
>>
>>
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com