Re: Rebuilding data resiliency after adding new OSDs stuck for so long at 5%


 



Which version are you using? Quincy currently ships with incorrect default values for its new IOPS (mClock) scheduler; this will be fixed in the next release (hopefully soon). In the meantime there are workarounds, please check the mailing list archives for them. I'm in a hurry, so I can't point directly to the right post.
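
For what it's worth, here is a rough sketch of two knobs that are commonly suggested as workarounds, assuming a Quincy cluster running the default mClock scheduler (not a definitive fix, adjust to your own setup and check the archives for details):

    # Option 1: tell mClock to prioritise recovery/backfill over client I/O
    ceph config set osd osd_mclock_profile high_recovery_ops

    # Option 2: fall back to the older wpq scheduler
    # (the OSDs must be restarted for this to take effect)
    ceph config set osd osd_op_queue wpq

If you go with option 1, remember to set osd_mclock_profile back (e.g. to balanced) once recovery has finished, otherwise client I/O stays deprioritised.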

Best regards, 
Sake

On 14 Sept 2023 07:55, sharathvuthpala@xxxxxxxxx wrote:

Hi,

We have HDD disks.

Today, after almost 36 hours, Rebuilding Data Resiliency is at 58% and still progressing. The good thing is that it is no longer stuck at 5%.

Does it normally take this long to complete the resiliency rebuild whenever there is maintenance in the cluster?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


