Re: 1 PG stuck in "active+undersized+degraded" for long time


 



Hello Eugen,

The requested details are below.

PG ID: 15.28f0
Pool ID: 15
Pool: default.rgw.buckets.data
Pool EC Ratio: 8:3
Number of Hosts: 12
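
Since the profile below is k=8, m=3 with a host failure domain, every PG needs 11 distinct hosts out of the 12 available, so there is very little slack and a single down/out host can leave one shard unmapped. For reference, the current up/acting mapping for this PG can be pulled with (PG ID as above):

#ceph pg map 15.28f0
#ceph pg 15.28f0 query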

## crush dump for rule ##
#ceph osd crush rule dump data_ec_rule
{
    "rule_id": 1,
    "rule_name": "data_ec_rule",
    "ruleset": 1,
    "type": 3,
    "min_size": 3,
    "max_size": 11,
    "steps": [
        {
            "op": "set_chooseleaf_tries",
            "num": 5
        },
        {
            "op": "set_choose_tries",
            "num": 100
        },
        {
            "op": "take",
            "item": -50,
            "item_name": "root_data~hdd"
        },
        {
            "op": "chooseleaf_indep",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}

## From Crushmap dump ##
rule data_ec_rule {
	id 1
	type erasure
	min_size 3
	max_size 11
	step set_chooseleaf_tries 5
	step set_choose_tries 100
	step take root_data class hdd
	step chooseleaf indep 0 type host
	step emit
}
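
In case it helps, the rule can also be exercised offline with crushtool to confirm it can actually come up with 11 distinct hosts for every PG; with k+m=11 out of only 12 hosts, CRUSH can run out of retries even with set_choose_tries 100. A rough check (rule id 1 as dumped above, the file name is just an example):

#ceph osd getcrushmap -o crushmap.bin
#crushtool -i crushmap.bin --test --rule 1 --num-rep 11 --show-bad-mappings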

## EC Profile ##
#ceph osd erasure-code-profile get data
crush-device-class=hdd
crush-failure-domain=host
crush-root=root_data
jerasure-per-chunk-alignment=false
k=8
m=3
plugin=jerasure
technique=reed_sol_van
w=8
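
To rule out a mismatch between the pool, this profile and the rule above, both settings can be cross-checked on the pool itself (pool name as listed above):

#ceph osd pool get default.rgw.buckets.data erasure_code_profile
#ceph osd pool get default.rgw.buckets.data crush_rule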

OSD Tree:
https://pastebin.com/raw/q6u7aSeu


