Re: PG stuck degraded, undersized, unclean

On Wed, Feb 18, 2015 at 9:09 PM, Brian Rak <brak@xxxxxxxxxxxxxxx> wrote:
>> What does your crushmap look like (ceph osd getcrushmap -o
>> /tmp/crushmap; crushtool -d /tmp/crushmap)? Does your placement logic
>> prevent Ceph from selecting an OSD for the third replica?
>>
>> Cheers,
>> Florian
>
>
> I have 5 hosts, and it's configured like this:

That's not the full crushmap, so I'm left to guess a bit here...
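
For reference, you can dump the whole thing to plain text roughly like
this (off the top of my head, so adjust paths as needed):

  ceph osd getcrushmap -o /tmp/crushmap             # grab the compiled map
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt   # decompile to text
  cat /tmp/crushmap.txt

The interesting part would be the host buckets and which OSDs they
contain, not just the root bucket you pasted below.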

> root default {
>         id -1           # do not change unnecessarily
>         # weight 204.979
>         alg straw
>         hash 0  # rjenkins1
>         item osd01 weight 12.670
>         item osd02 weight 14.480
>         item osd03 weight 14.480
>         item osd04 weight 79.860
>         item osd05 weight 83.490

Whence the large weight difference? Are osd04 and osd05 really that
much bigger in disk space?
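
A quick way to check how the weights line up with actual capacity (from
memory, so the exact output format may differ on your version):

  ceph osd tree     # shows the CRUSH weight next to every host and OSD
  ceph df           # overall raw capacity and usage

By convention the CRUSH weight is roughly the raw capacity in TB, so
83.490 on osd05 versus 12.670 on osd01 means CRUSH will try to place
about six and a half times as much data on osd05.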

> rule replicated_ruleset {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
>
> This should not be preventing the assignment (AFAIK).  Currently the PG is
> on osd01 and osd05.
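
One way to rule the rule itself out would be to run the map through
crushtool's test mode (flags from memory, so double-check against your
crushtool version):

  crushtool -i /tmp/crushmap --test --rule 0 --num-rep 3 \
      --show-bad-mappings

If that prints bad mappings (fewer than 3 OSDs for some inputs), CRUSH
itself can't always find a third host with this map; if it comes back
clean, the problem is more likely elsewhere (down/out OSDs, full OSDs,
and so on).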

Just checking: are you sure you're not running short on space (close to
90% utilization) on one of your OSD filesystems?
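
A couple of quick checks along those lines (assuming the default OSD
data path, so adjust if yours differs):

  ceph health detail               # near-full/full OSDs plus the stuck PG's id
  ceph pg <pgid> query             # up/acting sets and recovery state for that PG
  df -h /var/lib/ceph/osd/ceph-*   # run on each OSD host

The pg query output in particular should show which OSDs the PG wants
in its up set, which usually makes it clearer why the third copy isn't
being placed.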

Cheers,
Florian
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



