pg_autoscaler is not working

Hi,

I enabled pg_autoscaler on a specific pool, ssd.
However, I cannot get pg_num / pgp_num of pool ssd increased to 1024:
root@ld3955:~# ceph osd pool autoscale-status
 POOL             SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
 cephfs_metadata  395.8M               3.0   118.9T        0.0000                4.0   8                   off
 hdb_backup       713.2T               3.0   1354T         1.5793                1.0   16384               off
 nvme             0                    2.0   23840G        0.0000                1.0   128                 off
 cephfs_data      1068G                3.0   118.9T        0.0263                1.0   32                  off
 hdd              733.9G               3.0   118.9T        0.0181                1.0   2048                off
 ssd              1711G                2.0   27771G        0.1233                1.0   1024                on
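For context on where the autoscaler's suggested PG counts come from, its sizing heuristic can be approximated as below. This is a simplified sketch of the documented behaviour, not the actual mgr code; the OSD count (48) and the default of 100 target PGs per OSD are assumed values, not taken from this cluster:

```python
import math

def suggest_pg_num(usage_ratio, num_osds, replica_size,
                   target_pgs_per_osd=100, bias=1.0, pg_num_min=1):
    """Rough approximation of the pg_autoscaler's suggested pg_num.

    Simplified: the pool gets its usage-ratio share of the cluster's PG
    budget, divided across replicas, rounded to the nearest power of two
    and clamped to pg_num_min.
    """
    raw = usage_ratio * num_osds * target_pgs_per_osd * bias / replica_size
    raw = max(raw, 1)
    pg = 2 ** round(math.log2(raw))  # autoscaler targets powers of two
    return max(pg, pg_num_min)

# Hypothetical example for a pool like ssd (ratio 0.1233, size 2),
# assuming 48 OSDs:
print(suggest_pg_num(0.1233, 48, 2))
```

Note how pg_num_min acts as a floor: with pg_num_min 512 set on the pool, the suggestion can never drop below 512 regardless of the usage ratio.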

The target pg_num for this pool is correctly set to 1024:
root@ld3955:~# ceph osd pool ls detail
pool 11 'hdb_backup' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 16384 pgp_num 10977 pgp_num_target 16384 last_change 344888 lfor 0/0/319352 flags hashpspool,selfmanaged_snaps stripe_width 0 pg_num_min 8192 application rbd
        removed_snaps [1~3]
pool 59 'hdd' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 319283 lfor 307105/317145/317153 flags hashpspool,selfmanaged_snaps stripe_width 0 pg_num_min 1024 application rbd
        removed_snaps [1~3]
pool 60 'ssd' replicated size 2 min_size 2 crush_rule 4 object_hash rjenkins pg_num 512 pgp_num 512 pg_num_target 1024 pgp_num_target 1024 autoscale_mode on last_change 341736 lfor 305915/305915/305915 flags hashpspool,selfmanaged_snaps,creating stripe_width 0 pg_num_min 512 application rbd
        removed_snaps [1~3]
pool 62 'cephfs_data' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 32 pgp_num 32 last_change 319282 lfor 300310/300310/300310 flags hashpspool stripe_width 0 pg_num_min 32 application cephfs
pool 63 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 8 pgp_num 8 last_change 319280 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 8 recovery_priority 5 application cephfs
pool 65 'nvme' replicated size 2 min_size 2 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 319281 flags hashpspool stripe_width 0 pg_num_min 128 application rbd

However, there is no activity on the cluster for pool ssd; its pg_num is not increasing.
The cluster is busy working on another pool, hdb_backup, though: the pg_num of that pool was recently raised to 16384 (on Monday, to be precise).
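(The hdb_backup output above shows pgp_num 10977 still walking toward pgp_num_target 16384. Since Nautilus, the mgr raises pg_num/pgp_num gradually rather than in one jump, throttled so that data movement stays bounded. The following is a rough illustrative model of that stepping, not the actual mgr logic; the assumption that each step is capped at 5% of the target is a simplification:)

```python
def pgp_num_steps(current, target, max_misplaced_ratio=0.05):
    """Toy model of the mgr walking pgp_num toward pgp_num_target.

    Simplifying assumption: each step raises pgp_num by at most
    max_misplaced_ratio of the target, keeping the fraction of
    misplaced objects per step bounded.
    """
    steps = []
    while current < target:
        step = max(1, int(target * max_misplaced_ratio))
        current = min(current + step, target)
        steps.append(current)
    return steps

# e.g. hdb_backup's pgp_num 10977 -> 16384 proceeds in bounded increments
steps = pgp_num_steps(10977, 16384)
```

If something like this is what the mgr is still doing for hdb_backup, that might also explain why the ssd pool's split has not started yet.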

What makes things worse is that I now cannot increase pg_num (or
pgp_num) manually either:
root@ld3955:~# ceph osd pool get ssd pg_num
pg_num: 512
root@ld3955:~# ceph osd pool get ssd pgp_num
pgp_num: 512
root@ld3955:~# ceph osd pool set ssd pg_num 1024
root@ld3955:~# ceph osd pool get ssd pg_num
pg_num: 512

How can I increase pg_num / pgp_num of pool ssd?

THX

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



