Please provide more details about your environment; otherwise it's
just guesswork as to what could have happened.
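
For example, output along these lines usually gives enough to work with
(the pool name and PG ID below are only placeholders, adjust them to
your cluster):

  # cluster version and overall state
  ceph versions
  ceph -s
  ceph health detail

  # OSD and pool layout (size/min_size, replicated vs. EC)
  ceph osd tree
  ceph osd pool ls detail

  # the incomplete PGs themselves
  ceph pg ls incomplete
  ceph pg 2.1f query    # replace 2.1f with one of your incomplete PG IDs

The pg query output in particular usually shows which OSDs the PG is
still trying to probe, which often points to the cause.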
Quoting huxiaoyu@xxxxxxxxxxxx:
I am using a replicated pool with min_size=1. I do not have any disk
failures, so I did not expect incomplete PGs, but they appeared after
the OSDs flapped.
huxiaoyu@xxxxxxxxxxxx
From: Eugen Block
Date: 2020-08-15 09:39
To: huxiaoyu
CC: ceph-users
Subject: Re: how to handle incomplete PGs
Hi,
Did you wait for the backfill to complete before removing the old
drives? What is your environment? Are the affected PGs from an EC
pool? Does [1] apply to you?
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/035743.html
Quoting huxiaoyu@xxxxxxxxxxxx:
Dear Ceph folks,
Recently I encountered incomplete PGs when replacing an OSD node
with new hardware. I noticed multiple OSD ups and downs, and
eventually a few PGs got stuck in the incomplete state.
Questions:
1: Is there a reliable way to avoid the occurrence of incomplete PGs?
2: Is there a good tool or script to handle incomplete PGs without
losing data?
Best regards,
samuel
huxiaoyu@xxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx