Autoscale - enable or not on main pool?

I recently set up a new Octopus cluster and was testing the autoscale
feature.  I used ceph-ansible, so it's enabled by default.  Anyhow, I have
three other clusters that are on Nautilus, so I wanted to see if it made
sense to enable it there on the main pool.

 

Here is a printout of the autoscale status:

POOL                         SIZE TARGET SIZE RATE RAW CAPACITY  RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
default.rgw.buckets.non-ec     0               2.0       55859G 0.0000                               1.0     32            on
default.rgw.meta            9298               3.0       55859G 0.0000                               1.0     32            on
default.rgw.buckets.index  18058M              3.0       55859G 0.0009                               1.0     32            on
default.rgw.control            0               3.0       55859G 0.0000                               1.0     32            on
default.rgw.buckets.data    9126G              2.0       55859G 0.3268                               1.0   4096       1024 off
.rgw.root                   3155               3.0       55859G 0.0000                               1.0     32            on
rbd                        155.5G              2.0       55859G 0.0056                               1.0     32            on
default.rgw.log            374.4k              3.0       55859G 0.0000                               1.0     64            on

 

For this entry:

default.rgw.buckets.data    9126G              2.0       55859G 0.3268                               1.0   4096       1024 off

 

I have autoscaling disabled on it because it was generating a warning, but
it's recommending a 1024 PG setting.  When I use the online Ceph PG
calculator at ceph.io, it says the 4096 setting is correct.  So why is the
autoscaler saying 1024?

 

There are 6 OSD servers with 10 OSDs each (all SSD), 60 TB total.
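
For reference, here is a rough sketch (Python) of how I think the two
numbers come about.  It assumes the default mon_target_pg_per_osd of 100;
the ceph.io calculator sizes the pool as if it will hold essentially all of
the data, while the autoscaler (as I understand it) scales the same target
by the pool's current capacity ratio:

import math

num_osds = 60            # 6 servers x 10 OSDs (all SSD)
target_pg_per_osd = 100  # assuming the default mon_target_pg_per_osd
pool_size = 2            # replica count of default.rgw.buckets.data
capacity_ratio = 0.3268  # RATIO column from the autoscale status above

# pgcalc-style estimate: assume the pool holds ~100% of the data,
# then round up to the next power of two.
pgcalc = 2 ** math.ceil(math.log2(num_osds * target_pg_per_osd / pool_size))
print(pgcalc)                          # 4096

# Autoscaler-style estimate (my understanding): scale the same target by
# the fraction of raw capacity the pool actually uses, then round to the
# nearest power of two.
ideal = capacity_ratio * num_osds * target_pg_per_osd / pool_size
print(2 ** round(math.log2(ideal)))    # 980.4 -> 1024

If that reading is right, the gap is just the usage ratio: the calculator
sizes for the pool eventually holding most of the cluster, while the
autoscaler only sizes for the roughly 33% it holds today.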

 

Pool LS output:

pool 1 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 8800 lfor
0/0/344 flags hashpspool stripe_width 0 application rgw

pool 2 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 8799
lfor 0/0/346 flags hashpspool stripe_width 0 application rgw

pool 3 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 8798
lfor 0/0/350 flags hashpspool stripe_width 0 application rgw

pool 4 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0
object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 8802
lfor 0/0/298 flags hashpspool stripe_width 0 application rgw

pool 5 'default.rgw.buckets.index' replicated size 3 min_size 1 crush_rule 0
object_hash rjenkins pg_num 638 pgp_num 608 pg_num_target 32 pgp_num_target
32 autoscale_mode on last_change 10320 lfor 0/10320/10318 owner
18446744073709551615 flags hashpspool stripe_width 0 application rgw

pool 7 'default.rgw.buckets.data' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 4096 pgp_num 4096 last_change 9467 lfor 0/0/552
owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw

pool 8 'default.rgw.buckets.non-ec' replicated size 2 min_size 1 crush_rule
0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
8797 lfor 0/0/348 owner 18446744073709551615 flags hashpspool stripe_width 0
application rgw

pool 9 'rbd' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins
pg_num 32 pgp_num 32 autoscale_mode on last_change 8801 flags
hashpspool,selfmanaged_snaps stripe_width 0 application rbd

 

 

Regards,

-Brent

 

Existing Clusters:

Test: Octopus 15.2.5 (all virtual on NVMe)

US Production (HDD): Nautilus 14.2.11 with 11 OSD servers, 3 mons, 4 gateways, 2 iSCSI gateways

UK Production (HDD): Nautilus 14.2.11 with 18 OSD servers, 3 mons, 4 gateways, 2 iSCSI gateways

US Production (SSD): Nautilus 14.2.11 with 6 OSD servers, 3 mons, 4 gateways, 2 iSCSI gateways

UK Production (SSD): Octopus 15.2.5 with 5 OSD servers, 3 mons, 4 gateways

 

 



