Hi,
depending on the actual size of the PGs and OSDs, it could be
sufficient to temporarily increase the backfillfull_ratio (default
90%) to 91% or 92%. At 95% the cluster is considered full, so you
need to be really careful with those ratios. If you provided more
details about the current state, the community might have a couple
more ideas. I haven't used the pgremapper yet, so I can't really
comment on that.
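For reference, a rough sketch of the commands involved (the ratio
values and the OSD id below are just example placeholders, adjust them
to your situation):

  # show the currently configured ratios (full / backfillfull / nearfull)
  ceph osd dump | grep ratio

  # temporarily raise the backfillfull threshold, e.g. to 92%
  ceph osd set-backfillfull-ratio 0.92

  # get an overview of per-OSD utilisation and per-PG sizes
  ceph osd df tree
  ceph pg ls-by-osd osd.0 | head

  # once backfill has finished, set it back to the default
  ceph osd set-backfillfull-ratio 0.90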
Quoting "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:
Hi,
We have a 60% full cluster where unfortunately the pg autoscaler
hasn't been used, and over time this generated gigantic PGs which,
when moved from one OSD to another, fill up the target OSD and block
writes.
Tried to add 2 new nodes, but before they can be utilised other OSDs
get full.
Now trying to reweight / crush reweight the backfillfull OSDs, but it
feels like chasing something that can't be caught.
What's the best practice to get out of this situation?
Remap PGs manually to the empty OSDs with some remapper tool? I'm
afraid it will not check the PG size.
Increasing pg_num now will, I guess, fill things up again, because
the cleanup only happens after the PG increase.
Istvan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx