Re: PG_BACKFILL_FULL

Thank you for the answer


On 17/01/23 09:27, Stefan Kooman wrote:
On 1/17/23 08:39, Iztok Gregori wrote:
Thanks for your response and advice.

On 16/01/23 15:17, Boris Behrens wrote:
Hmm... I ran into a similar issue.

IMHO there are two ways to work around the problem until the new disk is in place: 1. change the backfill full threshold (I use these commands: https://www.suse.com/support/kb/doc/?id=000019724)

If I understand correctly, the "backfillfull_ratio" is a ratio above which a warning is triggered and the cluster will deny backfilling to the OSD in question. But my OSD (87.53%) is not above the ratio (90%). Granted, it is possible that "after" the 3 PGs are moved to that OSD the ratio will be crossed, but right now we are below it.
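For reference, the thresholds and per-OSD utilization can be checked, and the backfillfull threshold temporarily raised if needed, with something like the following (the 0.92 is just an example value):

  # show the configured full / backfillfull / nearfull ratios
  ceph osd dump | grep ratio

  # show per-OSD utilization and variance
  ceph osd df tree

  # temporarily raise the backfillfull threshold, e.g. to 92%
  ceph osd set-backfillfull-ratio 0.92

Remember to lower it back once the new disks are in and the cluster has rebalanced.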

I haven't checked the actual calculations, but Ceph might calculate the amount of space left on the OSD as it would be once the mapped backfills have finished, and trigger the warning because of that.

This is a reasonable explanation; it could very well be like that.

I'm just wondering why Ceph picked that particular OSD in the first place. It could be that at the moment the "gentle-reweight" was executed, the OSD in question was not "nearfull" and so was chosen as the backfill target, and only afterwards filled up with data.

[cut]


Ideally I would just like to manually set the new "location" of the PGs away from the nearfull osd.60. I see there are some commands called "ceph osd pg-upmap" and "ceph osd pg-upmap-items" which could be the right tool for what I want to achieve, but I didn't find much information about them. Does somebody know more? Are those tools "safe" to run in my case?

Yes. A couple of tools rely heavily on "upmap", including the Ceph balancer itself. Instead of "weights", which were the knob to use _before_ Luminous, upmaps are the "new" (since Luminous) way of balancing clusters, precisely for what you want to achieve: a way to map PGs to OSDs.

If you want to use a tool, you can look at [1]. It's made and used by DigitalOcean. You can also set upmaps yourself, i.e. ceph osd pg-upmap-items <pg-id> <from-osd> <to-osd>
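For completeness, pg-upmap-items takes a PG id followed by one or more (from-OSD, to-OSD) pairs; the ids below are only placeholders:

  # move the copy of PG 7.1a that maps to osd.60 onto osd.75 instead
  ceph osd pg-upmap-items 7.1a 60 75

  # the mapping can be dropped again later
  ceph osd rm-pg-upmap-items 7.1a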

If I understand correctly, the "pgremapper" tool uses "ceph osd pg-upmap-items" to "remap" a PG to a different OSD, so I can use the latter directly. Because there are relatively free OSDs (58%) on the same server as the source OSD, I will use those as targets.
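A rough sketch of that workflow, assuming osd.60 is the nearfull OSD and osd.75 a less-used OSD on the same host (both placeholders), could look like:

  # list the PGs currently (waiting to be) backfilled onto osd.60
  ceph pg ls-by-osd 60 backfilling backfill_wait

  # check utilization of the OSDs on that host to pick a target
  ceph osd df tree

  # see the current up/acting set of a candidate PG
  ceph pg map 7.1a

  # redirect that PG from osd.60 to osd.75
  ceph osd pg-upmap-items 7.1a 60 75

Note that upmaps require all clients to speak at least Luminous (ceph osd set-require-min-compat-client luminous).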


If possible (when you have new hardware and more space available), start using upmaps to achieve better balancing. Newer tools, the Ceph balancer as well, will use upmaps more and more, not only to obtain more evenly distributed space utilization but also better (read) performance. If you need more persuasion, you might want to watch this talk [2].
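As a rough idea of what enabling upmap-based balancing looks like once there is headroom again (assuming the balancer module is available in your release):

  # upmap requires Luminous or newer clients
  ceph osd set-require-min-compat-client luminous

  # switch the balancer to upmap mode and turn it on
  ceph balancer mode upmap
  ceph balancer on

  # check what it is doing
  ceph balancer status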

Thank you, I will take a look at the talk.


Cheers
Iztok


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


