Expected behaviour when pg_autoscale_mode off

Hi folks,

I am testing out pg_autoscale_mode on a new and empty 16.2.7 cluster
and was a bit confused by its behaviour.
I created a pool with a specific number of PGs, roughly based on pgcalc
and the expected size, like in the old days.
The autoscaler then started lowering the PG count as expected, since the
pool was still empty and I had not set any target_size.
However, after I turned off pg_autoscale_mode on the pool, it kept
lowering the PGs down to what in my case was the new pg_num_target of 32.
Is this intended behaviour even after turning off pg_autoscale_mode? If
so, is there any other way of stopping it, or am I hitting an edge case
because it is assumed that target_size is set?
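
For completeness: if I read the docs right, I suspect the shrink could
have been avoided up front by giving the autoscaler a size hint before
disabling it, something along these lines (numbers made up, not verified
on this cluster):

root:~# ceph osd pool set testbench target_size_ratio 0.2
root:~# ceph osd pool set testbench target_size_bytes 100T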

See below for an example where pg_num and pgp_num kept dropping on the
pool `testbench` even after pg_autoscale_mode was set to off:

root:~# ceph osd pool create testbench 4096 4096
pool 'testbench' created

root:~# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 502
flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 6 'testbench' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 4096 pgp_num 3994 pg_num_target 32 pgp_num_target 32
autoscale_mode on last_change 10016 lfor 0/0/10011 flags hashpspool
stripe_width 0

root:~# ceph osd pool set testbench pg_autoscale_mode off
set pool 6 pg_autoscale_mode to off

root:~# ceph osd pool autoscale-status
POOL                     SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  PROFILE
device_health_metrics       0               3.0         769.3T  0.0000                                 1.0       1              off        scale-up
testbench                   0               3.0         769.3T  0.0000                                 1.0      32              off        scale-up
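
Note that PG_NUM in autoscale-status already reports 32 for testbench
here, while `ceph osd pool ls detail` below still shows roughly 4000
actual PGs, so to follow the merge progress I have just been polling the
detail output, e.g.:

root:~# watch -n 30 "ceph osd pool ls detail | grep testbench"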

root:~# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 502
flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 6 'testbench' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 4082 pgp_num 3880 pg_num_target 32 pgp_num_target 32
autoscale_mode off last_change 10070 lfor 0/10070/10068 flags hashpspool
stripe_width 0

root:~# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 502
flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 6 'testbench' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 4072 pgp_num 3870 pg_num_target 32 pgp_num_target 32
pg_num_pending 4071 autoscale_mode off last_change 10110 lfor 0/10110/10110
flags hashpspool stripe_width 0
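
In the meantime I assume I could stop the ongoing merges by pinning
pg_num (and pgp_num) back to the original value by hand now that the
autoscaler is off, something like the following, but I would rather
understand whether this is expected behaviour before poking at it:

root:~# ceph osd pool set testbench pg_num 4096
root:~# ceph osd pool set testbench pgp_num 4096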

All the best
--
Sandor Zeestraten