On 8/11/20 8:35 AM, Michael Thomas wrote:
On 8/11/20 2:52 AM, Wido den Hollander wrote:
On 11/08/2020 00:40, Michael Thomas wrote:
On my relatively new Octopus cluster, I have one PG that has been
perpetually stuck in the 'unknown' state. It appears to belong to
the device_health_metrics pool, which was created automatically by
the mgr daemon(?).
The OSDs that the PG maps to are all online and serving other PGs.
But when I list the PGs that belong to the OSDs reported by 'ceph pg
map', the offending PG is not among them.
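(For reference, a cross-check along these lines should show whether a
given OSD even knows about the PG; osd.41 is just an example, taken from
the 'ceph pg map' output further down:)
# ceph pg ls-by-osd 41 | grep '^1\.0 '   # the PG should be listed here if osd.41 is serving it
# ceph pg 1.0 query                      # typically hangs or errors out while the PG is unknown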
# ceph pg dump pgs | grep ^1.0
dumped pgs
1.0  0  0  0  0  0  0  0  0  0  0  unknown  2020-08-08T09:30:33.251653-0500  0'0  0:0  []  -1  []  -1  0'0  2020-08-08T09:30:33.251653-0500  0'0  2020-08-08T09:30:33.251653-0500  0
# ceph osd pool stats device_health_metrics
pool device_health_metrics id 1
nothing is going on
# ceph pg map 1.0
osdmap e7199 pg 1.0 (1.0) -> up [41,40,2] acting [41,0]
What can be done to fix the PG? I tried doing a 'ceph pg repair
1.0', but that didn't seem to do anything.
Is it safe to try to update the crush_rule for this pool so that the
PG gets mapped to a fresh set of OSDs?
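(If it comes to that, the switch should just be a matter of picking one
of the existing rules and assigning it to the pool; 'replicated_nvme'
below is only a stand-in for whatever the target rule is actually
named:)
# ceph osd crush rule ls
# ceph osd pool set device_health_metrics crush_rule replicated_nvme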
Yes, it would be. Still, it's odd, mainly because the acting set is
so different from the up set.
You have different CRUSH rules I think?
Marking those OSDs down might work, but otherwise change the
crush_rule and see how that goes.
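(If marking the OSDs down is the route taken, the daemon should notice
and re-assert itself right away, forcing the PG to re-peer; with the IDs
from the pg map above that would be roughly:)
# ceph osd down 41      # the OSD comes straight back up and the PG has to re-peer
# ceph pg map 1.0       # check whether up/acting look sane afterwards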
Yes, I do have different crush rules to help map certain types of data
to different classes of hardware (EC HDDs, replicated SSDs, replicated
nvme). The default crush rule for the device_health_metrics pool was to
use replication across any storage device. I changed it to use the
replicated nvme crush rule, and now the map looks different:
# ceph pg map 1.0
osdmap e7256 pg 1.0 (1.0) -> up [24,22,12] acting [41,0]
However, the acting set of OSDs has not changed.
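(For watching whether the acting set ever catches up with the new up
set, something like this gives the state and both sets in one line:)
# ceph pg dump pgs_brief | grep '^1\.0 '   # state plus up/acting sets and their primaries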
A little more info:
ceph status is reporting a slow OSD, which happens to be the primary OSD
for the offending PG:
health: HEALTH_WARN
1 pools have many more objects per pg than average
1 backfillfull osd(s)
2 nearfull osd(s)
Reduced data availability: 1 pg inactive
304 pgs not deep-scrubbed in time
2 pool(s) backfillfull
2294 slow ops, oldest one blocked for 1122032 sec, osd.41 has slow ops
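(The slow ops themselves can be inspected on the OSD through its admin
socket; run on the host carrying osd.41, something along these lines
should show what they are stuck waiting on:)
# ceph daemon osd.41 dump_ops_in_flight   # all ops currently in flight on this OSD
# ceph daemon osd.41 dump_blocked_ops     # just the ops flagged as blocked/slow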
The OSD log is getting spammed with messages about slow requests:
2020-08-21T20:18:22.196-0500 7f5dcf57c700 0 log_channel(cluster) log [WRN] : slow request osd_op(client.467201.0:1 1.0 1.5c37f5a3 (undecoded) ondisk+retry+read+known_if_redirected e10214) initiated 2020-08-21T05:20:21.215515-0500 currently queued for pg
2020-08-21T20:18:22.196-0500 7f5dcf57c700 0 log_channel(cluster) log [WRN] : slow request osd_op(client.467201.0:2 1.0 1.501f1fd4 (undecoded) ondisk+retry+read+known_if_redirected e10215) initiated 2020-08-21T05:35:21.215764-0500 currently queued for pg
2020-08-21T20:18:22.196-0500 7f5dcf57c700 0 log_channel(cluster) log [WRN] : slow request osd_op(client.467201.0:2 1.0 1.501f1fd4 (undecoded) ondisk+retry+read+known_if_redirected e10459) initiated 2020-08-21T16:50:21.252787-0500 currently queued for pg
2020-08-21T20:18:22.196-0500 7f5dcf57c700 -1 osd.41 10491 get_health_metrics reporting 271 slow ops, oldest is osd_op(client.444105.0:1 1.0 1.5c37f5a3 (undecoded) ondisk+retry+read+known_if_redirected e7022)
I am not sure how to interpret the log messages. I have restarted the
OSD multiple times, but the warnings persist.
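(For what it's worth, the restarts amount to one of the following,
depending on how the OSDs are deployed:)
# systemctl restart ceph-osd@41       # package/systemd-based deployment
# ceph orch daemon restart osd.41     # cephadm-managed deployment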
--Mike