Quincy: osd_pool_default_crush_rule being ignored?

Hello everyone,

my cluster has two CRUSH rules: the default replicated_rule (rule_id 0), and another rule named rack-aware (rule_id 1).

Now, if I'm not misreading the config reference, I should be able to make all newly created pools use the rack-aware rule by setting osd_pool_default_crush_rule to 1.

I've verified that this option is defined in src/common/options/global.yaml.in, so the "global" configuration section should be the applicable one (I did try with "mon" and "osd" also, for good measure).
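For reference, the value does appear to land in the monitors' configuration database; something along these lines (output omitted here) confirms it on my cluster:

```shell
# Confirm the option is visible to the monitors
ceph config get mon osd_pool_default_crush_rule

# Show the option as stored, including which section it was set in
ceph config dump | grep osd_pool_default_crush_rule
```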

However, setting this option, in Quincy, apparently has no effect:

# ceph config set global osd_pool_default_crush_rule 1
# ceph osd pool create foo
pool 'foo' created
# ceph osd pool ls detail | grep foo
pool 9 'foo' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 264 flags hashpspool stripe_width 0
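In case it helps anyone else hitting this on Quincy: as a workaround I can of course specify the rule explicitly, either at creation time or after the fact (the pool name and pg counts below are just examples; rack-aware is my rule from above):

```shell
# Create the pool with an explicit CRUSH rule (by name)
ceph osd pool create foo 32 32 replicated rack-aware

# ...or repoint an existing pool at the desired rule
ceph osd pool set foo crush_rule rack-aware
```

That works, but it obviously defeats the point of having a cluster-wide default.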

I am seeing this behaviour in 17.2.7. After an upgrade to Reef (18.2.4) the problem is gone: the option behaves as documented, and new pools are created with a crush_rule of 1:

# ceph osd pool create bar
pool 'bar' created
# ceph osd pool ls detail | grep bar
pool 10 'bar' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 302 flags hashpspool stripe_width 0 read_balance_score 4.00

However, the documentation at https://docs.ceph.com/en/quincy/rados/configuration/pool-pg-config-ref/#confval-osd_pool_default_crush_rule states that osd_pool_default_crush_rule should already work in Quincy, and the Reef release notes at https://docs.ceph.com/en/latest/releases/reef/ don't mention a fix covering this.

Am I doing something wrong? Is this a documentation bug, and the option can't work in Quincy? Was this "accidentally" fixed at some point in the Reef cycle?

Thanks in advance for any insight you might be able to share.

Cheers,
Florian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



