Re: ceph-users Digest, Vol 26, Issue 20

Hi Burkhard,
 
Thanks a lot.
 
I modified the weight of each device, and the status now returns HEALTH_OK.
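 
For reference, the fix was a one-liner along these lines (0.01 is just an arbitrary small non-zero weight I chose for the 10G disks, not a recommended value):
 
# for i in $(seq 0 8); do ceph osd crush reweight osd.$i 0.01; done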
 
But I don't know why all the weights were 0.000.
 
I installed Ceph with ceph-deploy.
 
Maybe it's because the disk size of each OSD is very small, just 10 GB.
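 
If I understand it correctly, the initial CRUSH weight is taken from the device capacity in TiB (that is my assumption, based on the defaults), so a 10 GB disk would work out to roughly:
 
10 GB / 1024 GB-per-TiB = 0.0098, which ceph osd tree apparently displays as 0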
 
Regards,
Guanghua 
 
 

> Message: 29
> Date: Fri, 20 Mar 2015 10:19:07 +0100
> From: Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: [ceph-users] 'pgs stuck unclean ' problem
> Message-ID: <550BE60B.5010000@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Hi,
>
>
> On 03/20/2015 01:58 AM, houguanghua wrote:
> > Dear all,
> > Ceph 0.72.2 is deployed on three hosts, but the cluster's status is
> > HEALTH_WARN. The status is as follows:
> >
> > # ceph -s
> >     cluster e25909ed-25d9-42fd-8c97-0ed31eec6194
> >      health HEALTH_WARN 768 pgs degraded; 768 pgs stuck unclean;
> >             recovery 2/3 objects degraded (66.667%)
> >      monmap e3: 3 mons at
> >             {ceph-node1=192.168.57.101:6789/0,ceph-node2=192.168.57.102:6789/0,ceph-node3=192.168.57.103:6789/0},
> >             election epoch 34, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
> >      osdmap e170: 9 osds: 9 up, 9 in
> >       pgmap v1741: 768 pgs, 7 pools, 36 bytes data, 1 objects
> >             367 MB used, 45612 MB / 45980 MB avail
> >             2/3 objects degraded (66.667%)
> >                  768 active+degraded
> >
>
> *snipsnap*
>
> > Other info is shown below.
> >
> > # ceph osd tree
> > # id  weight  type name               up/down  reweight
> > -1    0       root default
> > -7    0           rack rack03
> > -4    0               host ceph-node3
> > 6     0                   osd.6       up       1
> > 7     0                   osd.7       up       1
> > 8     0                   osd.8       up       1
> > -6    0           rack rack02
> > -3    0               host ceph-node2
> > 3     0                   osd.3       up       1
> > 4     0                   osd.4       up       1
> > 5     0                   osd.5       up       1
> > -5    0           rack rack01
> > -2    0               host ceph-node1
> > 0     0                   osd.0       up       1
> > 1     0                   osd.1       up       1
> > 2     0                   osd.2       up       1
> >
> The weights of all OSD devices are 0. As a result, Ceph considers all
> OSDs unusable and does not select them for storing objects.
>
> This problem usually occurs in test setups with very small OSD devices.
> If that is the case in your setup, you can either adjust the weight of
> the OSDs or use larger devices. If your devices should have a sufficient
> size, you need to check why the OSD weights were not set accordingly.
>
> Best regards,
> Burkhard

