Re: Backfill full osds

This is the correct link sorry:
https://gist.github.com/Badb0yBadb0y/f29af56ab724603ac5fc385a680c4316

________________________________
From: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Sent: Wednesday, November 6, 2024 10:09 PM
To: Eugen Block <eblock@xxxxxx>; ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re:  Re: Backfill full osds

Thank you, I've collected some outputs here: https://gist.githubusercontent.com/Badb0yBadb0y/f29af56ab724603ac5fc385a680c4316/raw/95959203701a8cc0c85312a69dc77de25fc347d9/gistfile1.txt


________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Wednesday, November 6, 2024 9:11 PM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject:  Re: Backfill full osds

Email received from the internet. If in doubt, don't click any link nor open any attachment !
________________________________

Hi,

depending on the actual size of the PGs and OSDs, it could be
sufficient to temporarily increase the backfillfull_ratio (default
90%) to 91% or 92%. At 95% the cluster is considered full, so you
need to be really careful with those ratios. If you provided more
details about the current state, the community might have a couple
more ideas. I haven't used pgremapper yet, so I can't really
comment on that.
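As a sketch of the approach described above (the 0.92 value is illustrative, not a recommendation; check your cluster's actual utilisation first, and keep the value well below the full_ratio):

```shell
# Show the current nearfull / backfillfull / full ratios
ceph osd dump | grep -E 'full_ratio'

# Temporarily raise the backfillfull ratio from the default 0.90 to 0.92
# so stalled backfills can proceed (0.95 full_ratio blocks writes)
ceph osd set-backfillfull-ratio 0.92

# Once backfill has drained the overfull OSDs, revert to the default
ceph osd set-backfillfull-ratio 0.90
```

These commands require a live cluster and admin privileges, so treat this as an operational fragment rather than something to run blindly.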

Zitat von "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:

> Hi,
>
> We have a cluster that is 60% full where, unfortunately, the pg
> autoscaler hasn't been used, and over time it generated gigantic PGs.
> When one of them is moved from one OSD to another, the target OSD
> gets full and blocks writes.
> We tried to add 2 new nodes, but before they could be utilised,
> other OSDs got full.
> We are now trying to change the weight and crush reweight of the
> backfillfull OSDs, but it feels like chasing something that can't
> be caught.
>
> What's the best practice to get out of this situation?
>
> Remap PGs manually to an empty OSD with some remapper tool? I'm
> afraid it will not check the PG size.
>
> Increasing pg_num now will, I guess, fill things up again, because
> the cleanup happens after the pg increase.
>
>
>
> Istvan
>
> ________________________________
> This message is confidential and is for the sole use of the intended
> recipient(s). It may also be privileged or otherwise protected by
> copyright or other legal rules. If you have received it by mistake
> please let us know by reply email and delete it from your system. It
> is prohibited to copy this message or disclose its content to
> anyone. Any confidentiality or privilege is not waived or lost by
> any mistaken delivery or unauthorized disclosure of the message. All
> messages sent to and from Agoda may be monitored to ensure
> compliance with company policies, to protect the company's interests
> and to remove potential malware. Electronic messages may be
> intercepted, amended, lost or deleted, or contain viruses.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



