Re: Large number of misplaced PGs but little backfill going on

On Mon, Mar 25, 2024 at 10:58:24PM +0100, Kai Stian Olstad wrote:
On Mon, Mar 25, 2024 at 09:28:01PM +0100, Torkil Svensgaard wrote:
My tally came to 412 out of 539 OSDs showing up in a blocked_by list, and that is about every OSD that had data prior to adding the ~100 empty OSDs. How 400 read targets and 100 write targets can add up to only ~60 backfills with osd_max_backfills set at 3 just makes no sense to me, but alas.

It seems I can just increase osd_max_backfills even further to get the numbers I want, so that will do. Thank you all for taking the time to look at this.

It's a huge change, and 42% of your data needs to be moved.
And the movement is not only to the new OSDs but also between the existing OSDs,
which are busy backfilling themselves, so they have no free backfill reservations.
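
Ceph's real reservation handling has more moving parts (local and remote
reservations, blocked_by chains and so on), but a rough Python sketch of the
slot counting, with made-up OSD and PG names, shows why a move onto an idle
new OSD can still sit waiting on a busy source:

# Toy model, not Ceph code: every backfill holds one of osd_max_backfills
# reservation slots on both the OSD it reads from and the OSD it writes to.
OSD_MAX_BACKFILLS = 3

existing = ["osd.0", "osd.1", "osd.2", "osd.3"]   # already hold data
new = ["osd.4", "osd.5"]                          # freshly added, empty
slots = {osd: OSD_MAX_BACKFILLS for osd in existing + new}

# Misplaced PGs in queue order: the reshuffle among the existing OSDs comes
# first and uses up all their slots, the moves onto the new OSDs queue behind.
queue = [
    ("pg.1", "osd.0", "osd.1"), ("pg.2", "osd.1", "osd.2"),
    ("pg.3", "osd.2", "osd.3"), ("pg.4", "osd.3", "osd.0"),
    ("pg.5", "osd.0", "osd.2"), ("pg.6", "osd.1", "osd.3"),
    ("pg.7", "osd.0", "osd.4"),  # target is a brand-new OSD, source is busy
    ("pg.8", "osd.2", "osd.5"),
]

for pg, src, dst in queue:
    if slots[src] and slots[dst]:
        slots[src] -= 1
        slots[dst] -= 1
        print(f"{pg}: backfilling {src} -> {dst}")
    else:
        blocker = src if not slots[src] else dst
        print(f"{pg}: waiting, no free backfill reservation on {blocker}")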

I do recommend this document by Joshua Baergen at DigitalOcean, which explains
backfilling, the problems with it, and their solution, a tool called pgremapper.

Forgot the link:
https://ceph.io/assets/pdfs/user_dev_meeting_2023_10_19_joshua_baergen.pdf
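
If you do keep raising osd_max_backfills in the meantime, on releases with the
centralized config database something like

  ceph config set osd osd_max_backfills 6

should apply it to all OSDs at once (the value 6 is just an example). I would
go up in small steps and keep an eye on client latency, and note that on newer
releases running the mClock scheduler the OSDs may ignore the setting unless
you explicitly allow overriding the scheduler's recovery/backfill limits.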

--
Kai Stian Olstad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


