Re: Expected behaviour when pg_autoscale_mode off


 



Maybe try warn mode, so the decision of when to apply the suggested number is yours.
Or, if you already know what the pool size is going to be, just turn off the autoscaler for that pool and keep it on for the pools whose expected size you don't know.
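The two options above could be applied roughly like this (a sketch; `testbench` is taken from the question below, and the target size in bytes is only an illustrative value):

```shell
# Option 1: make the autoscaler advisory only. It will still report a
# suggested pg_num in 'ceph osd pool autoscale-status' (and raise a health
# warning), but it will not change the pool itself.
ceph osd pool set testbench pg_autoscale_mode warn

# Option 2: if the expected pool size is known, tell the autoscaler about it
# so its suggestion is based on the future size rather than the empty pool,
# or simply disable it for this pool.
ceph osd pool set testbench target_size_bytes 1099511627776  # example: 1 TiB
ceph osd pool set testbench pg_autoscale_mode off
```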

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Sandor Zeestraten <sandor@xxxxxxxxxxxxxxx>
Sent: Friday, April 22, 2022 4:15 PM
To: ceph-users@xxxxxxx
Subject:  Expected behaviour when pg_autoscale_mode off


Hi folks,

I am testing out pg_autoscale_mode on a new, empty 16.2.7 cluster and was a bit confused by its behaviour.
I created a pool with a specific number of PGs, roughly based on pgcalc and the expected size, like in the old days.
The autoscaler then started lowering the PG count, as expected, since the pool was still empty and I hadn't set any target_size.
However, after I turned off pg_autoscale_mode on the pool, it kept lowering the PGs down to what in my case was the new pg_num_target of 32.
Is this intended behaviour even after turning off pg_autoscale_mode? If so, is there any other way of stopping it, or am I in an edge case because it is assumed that target_size is set?

See below for example where pg_num and pgp_num kept on lowering on the pool `testbench` even after setting pg_autoscale_mode off:

root:~# ceph osd pool create testbench 4096 4096
pool 'testbench' created

root:~# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 502 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 6 'testbench' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 4096 pgp_num 3994 pg_num_target 32 pgp_num_target 32 autoscale_mode on last_change 10016 lfor 0/0/10011 flags hashpspool stripe_width 0

root:~# ceph osd pool set testbench pg_autoscale_mode off
set pool 6 pg_autoscale_mode to off

root:~# ceph osd pool autoscale-status
POOL                   SIZE  TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  PROFILE
device_health_metrics  0                  3.0   769.3T        0.0000                                 1.0   1                   off        scale-up
testbench              0                  3.0   769.3T        0.0000                                 1.0   32                  off        scale-up

root:~# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 502 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 6 'testbench' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 4082 pgp_num 3880 pg_num_target 32 pgp_num_target 32 autoscale_mode off last_change 10070 lfor 0/10070/10068 flags hashpspool stripe_width 0

root:~# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 502 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 6 'testbench' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 4072 pgp_num 3870 pg_num_target 32 pgp_num_target 32 pg_num_pending 4071 autoscale_mode off last_change 10110 lfor 0/10110/10110 flags hashpspool stripe_width 0

All the best
--
Sandor Zeestraten
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



