Erasure Profile Pool caps at pg_num 1024

Hello Everyone,

I've run into problems with placement groups.

We have a 12-host Ceph cluster with 408 OSDs (HDD and SSD).

If I create a replicated pool with a large pg_num (16384), there are no problems; everything works.
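(For reference, the replicated pool was created along these lines; the pool name and the replicated CRUSH rule name below are just placeholders:)

# ceph osd pool create my-repl-pool 16384 16384 replicated my-replicated-rule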

If I do the same with an erasure-coded pool, I first get an error that can be worked around by raising mon_max_pg_per_osd, and after the pool is created I get a HEALTH_WARN in the ceph status due to too few PGs per OSD.

When I check pg_num after pool creation, it is capped at 1024.

I'm stuck at this point. Maybe I did something fundamentally wrong?

To illustrate my steps, I have summarized everything in a small example:

# ceph -v
ceph version 14.2.7 (fb8e34a687d76cd3bd45c2a0fb445432ab69b4ff) nautilus (stable)

# ceph osd erasure-code-profile get myerasurehdd
crush-device-class=hdd
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=7
m=5
plugin=jerasure
technique=reed_sol_van
w=8
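(With k=7 and m=5 every object is split into 12 chunks, so the pool gets size 12; that is the "size 12" showing up in the error further down. The profile was created with something along these lines, leaving the remaining settings such as w=8 at their defaults:)

# ceph osd erasure-code-profile set myerasurehdd \
      k=7 m=5 plugin=jerasure technique=reed_sol_van \
      crush-device-class=hdd crush-failure-domain=host crush-root=default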

# ceph osd crush rule dump sas_rule
{
    "rule_id": 0,
    "rule_name": "sas_rule",
    "ruleset": 0,
    "type": 3,
    "min_size": 1,
    "max_size": 12,
    "steps": [
        {
            "op": "take",
            "item": -2,
            "item_name": "default~hdd"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "rack"
        },
        {
            "op": "emit"
        }
    ]
}


# ceph osd pool create sas-pool 16384 16384 erasure myerasurehdd sas_rule
Error ERANGE:  pg_num 16384 size 12 would mean 196704 total pgs, which exceeds max 102000 (mon_max_pg_per_osd 250 * num_in_osds 408)
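(If my math is right, the EC pool alone accounts for 16384 PGs x size 12 = 196608 PG shards, roughly 482 per OSD across the 408 OSDs, which is why the default limit of 250 is exceeded and 500 is enough; the remaining 96 shards in the 196704 presumably come from our existing pools. The replicated pool with its default size of 3 only comes to 16384 x 3 = 49152, which would explain why it was created without complaint.)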

# ceph tell mon.\* injectargs '--mon-max-pg-per-osd=500'
mon.ceph-fs01: injectargs:mon_max_pg_per_osd = '500' (not observed, change may require restart)
mon.ceph-fs05: injectargs:mon_max_pg_per_osd = '500' (not observed, change may require restart)
mon.ceph-fs09: injectargs:mon_max_pg_per_osd = '500' (not observed, change may require restart)

# ceph osd pool create sas-pool 16384 16384 erasure myerasurehdd sas_rule
pool 'sas-pool' created
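(According to the "not observed, change may require restart" note, injectargs may not be the cleanest way to change this; the persistent equivalent would presumably be "ceph config set global mon_max_pg_per_osd 500". In any case, the pool create went through afterwards, as seen above.)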

# ceph -s
  cluster:
    id:     b9471b57-95a2-4e58-8f69-b5e6048bea7c
    health: HEALTH_WARN
            Reduced data availability: 1024 pgs incomplete
            too few PGs per OSD (7 < min 30)


# ceph osd pool get sas-pool pg_num
pg_num: 1024
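If it helps with debugging, I can also post the full output of "ceph osd pool ls detail" and "ceph osd dump" for this pool.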

Best regards,

Gunnar
