Reef v18.2.1: ceph osd pool autoscale-status gives empty output

Hello Users,
I deployed a new cluster with v18.2.1 but noticed that pg_num and pgp_num
always remain at 1 for the pools that have autoscaling turned on, and that
ceph osd pool autoscale-status returns no output. Below are the environment
details and the relevant output:

ceph> version
ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
ceph> status
  cluster:
    id:     273c8410-a333-11ee-b3c2-9791c3098e2b
    health: HEALTH_WARN
            clock skew detected on mon.ec-rgw-s3

  services:
    mon: 3 daemons, quorum ec-rgw-s1,ec-rgw-s2,ec-rgw-s3 (age 48m)
    mgr: ec-rgw-s1.icpgxx(active, since 69m), standbys: ec-rgw-s2.quzjfv
    osd: 3 osds: 3 up (since 29m), 3 in (since 49m)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    pools:   8 pools, 23 pgs
    objects: 1.85k objects, 6.0 GiB
    usage:   4.8 GiB used, 595 GiB / 600 GiB avail
    pgs:     23 active+clean

ceph> osd pool get noautoscale
noautoscale is off
ceph> osd pool autoscale-status
ceph> osd pool autoscale-status
ceph> osd pool ls detail
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags
hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
read_balance_score 3.00
pool 2 'default.rgw.buckets.data' erasure profile ec-21 size 3 min_size 2
crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode off
last_change 111 lfor 0/0/55 flags hashpspool stripe_width 8192
compression_algorithm lz4 compression_mode force application rgw
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 34 flags
hashpspool stripe_width 0 application rgw read_balance_score 3.00
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 37
flags hashpspool stripe_width 0 application rgw read_balance_score 3.00
pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 39
flags hashpspool stripe_width 0 application rgw read_balance_score 3.00
pool 6 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 41
flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw
read_balance_score 3.00
pool 7 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule
0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 44
flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw
read_balance_score 3.00
pool 8 'default.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule
0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 47
flags hashpspool stripe_width 0 application rgw read_balance_score 3.00

Note that I had manually changed pool ID 2 (default.rgw.buckets.data) myself,
which is why it shows pg_num 16 with autoscale_mode off above.
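For reference, that manual change was along these lines (a rough sketch;
setting pg_num usually adjusts pgp_num automatically on recent releases, so
the last command may be redundant):

ceph osd pool set default.rgw.buckets.data pg_autoscale_mode off
ceph osd pool set default.rgw.buckets.data pg_num 16
ceph osd pool set default.rgw.buckets.data pgp_num 16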

Is this by any chance due to PR [1]?

[1] https://github.com/ceph/ceph/pull/53658
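In the meantime, these are the checks I plan to run next (a rough sketch;
I'm assuming a cephadm deployment, and the mgr daemon name is taken from the
status output above):

# Ask for machine-readable output in case only the plain-text table is empty:
ceph osd pool autoscale-status --format json-pretty

# Compare the roots used by crush rule 0 (replicated) and rule 1 (the EC
# rule); as far as I understand, the autoscaler can skip pools whose rules
# map to overlapping roots, so it seems worth ruling that out:
ceph osd crush rule dump

# Raise mgr debug logging and look for pg_autoscaler messages in the active
# mgr's log (run on the host where that daemon lives):
ceph config set mgr debug_mgr 4/5
cephadm logs --name mgr.ec-rgw-s1.icpgxx | grep -i autoscal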

Thanks,
Jayanth


