Re: Quincy: osd_pool_default_crush_rule being ignored?


Still works:

quincy-1:~ # ceph osd crush rule create-simple simple-rule default osd
quincy-1:~ # ceph osd crush rule dump simple-rule
{
    "rule_id": 4,
...

quincy-1:~ # ceph config set mon osd_pool_default_crush_rule 4
quincy-1:~ # ceph osd pool create test-pool6
pool 'test-pool6' created
quincy-1:~ # ceph osd pool ls detail | grep test-pool
pool 24 'test-pool6' replicated size 2 min_size 1 crush_rule 4 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 2615 flags hashpspool stripe_width 0
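
For completeness, one way to double-check that the default really took effect is to read the option back from the mon and ask a fresh pool which rule it received (the rule id 4 and pool name here just follow my example above):

```shell
# Confirm the mon-level default is what we set (4 in this example)
ceph config get mon osd_pool_default_crush_rule

# Create another pool and query the rule it was assigned
ceph osd pool create test-pool7
ceph osd pool get test-pool7 crush_rule
```

These obviously need a live cluster, so treat them as a transcript sketch rather than a script.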



Quoting Florian Haas <florian@xxxxxxxxxxxxxx>:

> On 25/09/2024 09:05, Eugen Block wrote:
>> Hi,
>>
>> for me this worked in a 17.2.7 cluster just fine
>
> Huh, interesting!
>
>> (except for erasure-coded pools).
>
> Okay, *that* bit is expected. https://docs.ceph.com/en/quincy/rados/configuration/pool-pg-config-ref/#confval-osd_pool_default_crush_rule does say that the option sets the "default CRUSH rule to use when creating a replicated pool".
>
>> quincy-1:~ # ceph osd crush rule create-replicated new-rule default osd hdd
>
> Mine was a rule created with "create-simple"; would that make a difference?
>
> Cheers,
> Florian
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx