On 2/18/2015 3:01 PM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 7:53 PM, Brian Rak <brak@xxxxxxxxxxxxxxx> wrote:
We're running ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578),
and seeing this:
HEALTH_WARN 1 pgs degraded; 1 pgs stuck degraded; 1 pgs stuck unclean; 1 pgs
stuck undersized; 1 pgs undersized
pg 4.2af is stuck unclean for 77192.522960, current state
active+undersized+degraded, last acting [50,42]
pg 4.2af is stuck undersized for 980.617479, current state
active+undersized+degraded, last acting [50,42]
pg 4.2af is stuck degraded for 980.617902, current state
active+undersized+degraded, last acting [50,42]
pg 4.2af is active+undersized+degraded, acting [50,42]
However, ceph pg query doesn't really show any issues:
https://gist.githubusercontent.com/devicenull/9d911362e4de83c02e40/raw/565fe18163e261c8105e5493a4e90cc3c461ed9d/gistfile1.txt
(too long to post here)
I've also tried:
# ceph pg 4.2af mark_unfound_lost revert
pg has no unfound objects
How can I get Ceph to rebuild the third replica here? The pool's replica
count is 3, but only two OSDs are acting and I can't figure out why.
Enabling various debug logs doesn't reveal anything obvious to me.
I've tried restarting both OSDs, which did nothing.
What does your crushmap look like (ceph osd getcrushmap -o
/tmp/crushmap; crushtool -d /tmp/crushmap)? Does your placement logic
prevent Ceph from selecting an OSD for the third replica?
Cheers,
Florian
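A concrete way to run Florian's check, as a sketch: decompile the map, then have crushtool replay the rule offline and report any inputs that cannot get three replicas. The /tmp/crushmap path is Florian's, the rule number 0 and --num-rep 3 come from the dump and pool size quoted in this thread; the --test flags below exist in crushtool of this vintage, but verify against your release. The guard makes the snippet safe to paste on a box without the ceph tools.

```shell
#!/bin/sh
# Sketch: dump and decompile the live CRUSH map, then replay rule 0
# offline. --show-bad-mappings prints any input that maps to fewer
# than --num-rep OSDs, which is exactly the symptom here.
if command -v ceph >/dev/null 2>&1 && command -v crushtool >/dev/null 2>&1; then
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt    # human-readable dump
    crushtool -i /tmp/crushmap --test --rule 0 --num-rep 3 --show-bad-mappings
else
    echo "ceph/crushtool not installed; skipping"
fi
```

If that command prints bad mappings, the map (not the OSDs) is the problem; if it prints nothing, the map can place three replicas and the issue is elsewhere.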
I have 5 hosts, and it's configured like this:
root default {
	id -1		# do not change unnecessarily
	# weight 204.979
	alg straw
	hash 0	# rjenkins1
	item osd01 weight 12.670
	item osd02 weight 14.480
	item osd03 weight 14.480
	item osd04 weight 79.860
	item osd05 weight 83.490
}
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}
This should not be preventing the assignment (AFAIK). Currently the PG
is on osd01 and osd05.
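For reference, the bucket weights above are very skewed; a quick share-of-total computation (weights taken straight from the map) shows osd04 and osd05 together carry roughly 80% of the CRUSH weight, so most PGs will want replicas there:

```python
# Per-host share of total CRUSH weight, from the map quoted above.
weights = {"osd01": 12.670, "osd02": 14.480, "osd03": 14.480,
           "osd04": 79.860, "osd05": 83.490}
total = sum(weights.values())  # ~204.98, matching "# weight 204.979"
for host, w in sorted(weights.items()):
    print(f"{host}: {w / total:.1%}")
# osd04 and osd05 come out near 39% and 41% respectively.
```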
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com