Re: how to handle incomplete PGs

I am using a replicated pool with min_size=1. I do not have any disk failures, so I did not expect incomplete PGs, but they appeared after OSDs flapped.
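(As a sketch, assuming a pool named "mypool" as a placeholder, the relevant setting and the stuck PGs can be checked like this:

  # show the current min_size of the pool
  ceph osd pool get mypool min_size

  # with min_size=1 the pool keeps accepting writes with a single
  # surviving replica; setting it to 2 is the usual safeguard
  # against exactly this kind of flapping scenario
  ceph osd pool set mypool min_size 2

  # list PGs stuck inactive and show which ones are incomplete
  ceph pg dump_stuck inactive
  ceph health detail | grep incomplete
)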



huxiaoyu@xxxxxxxxxxxx
 
From: Eugen Block
Date: 2020-08-15 09:39
To: huxiaoyu
CC: ceph-users
Subject: Re: how to handle incomplete PGs
Hi,
 
did you wait for the backfill to complete before removing the old  
drives? What is your environment? Are the affected PGs from an EC  
pool? Does [1] apply to you?
 
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/035743.html
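 
As a starting point, something like the following should answer those questions ("1.2f" is a placeholder for one of the affected PG ids):
 
  # overall cluster state, including any running backfill/recovery
  ceph -s
 
  # shows for each pool whether it is replicated or erasure-coded
  ceph osd pool ls detail
 
  # peering history and the reason the PG is incomplete
  ceph pg 1.2f query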
 
 
Quoting huxiaoyu@xxxxxxxxxxxx:
 
> Dear Ceph folks,
>
> Recently I encountered incomplete PGs when replacing an OSD node
> with new hardware. I noticed multiple OSD ups and downs, and
> eventually a few PGs got stuck in the incomplete state.
>
> Question 1: Is there a reliable way to avoid the occurrence of
> incomplete PGs?
> Question 2: Is there a good tool or script to handle incomplete
> PGs without losing data?
>
> best regards,
>
> samuel
>
>
>
> huxiaoyu@xxxxxxxxxxxx
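 
Regarding question 2: there is no fully safe tool, but the commonly cited last resort is ceph-objectstore-tool on the stopped OSD that holds the most complete copy of the PG. A rough sketch follows; the OSD id 12, PG id 1.2f and the data path are placeholders, and mark-complete can lose the newest writes, so always take an export first:
 
  # stop the OSD holding the most complete copy of the PG
  systemctl stop ceph-osd@12
 
  # export the PG as a backup before touching anything
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --pgid 1.2f --op export --file /root/pg-1.2f.export
 
  # last resort: mark the PG complete on that OSD, accepting that
  # writes which only reached the lost replicas are gone
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --pgid 1.2f --op mark-complete
 
  systemctl start ceph-osd@12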
 
 
 
 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


