Re: cluster can't remapped objects after change crush tree

Thanks, man. Thanks a lot. Now I understand. So, just to be sure: if I have 3 hosts,
the replication factor is also 3, and I have a CRUSH rule like:
{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "default"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}

then my data is replicated across hosts, not across OSDs, all hosts hold
pieces of the data, and a situation like:

* host0 has a piece of data on osd.0
* host1 has pieces of data on osd.1 and osd.2
* host2 has no data

is completely excluded?
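Right. As a toy illustration (not CRUSH's actual straw2 hashing, just the selection shape it guarantees), a sketch of what "chooseleaf_firstn, type host" does: it first picks distinct host buckets, then one OSD (leaf) inside each, so two replicas can never land on the same host. The host/OSD layout below is hypothetical, matching the example above plus one OSD on host2:

```python
import random

# Hypothetical cluster layout for the example above.
hosts = {
    "host0": ["osd.0"],
    "host1": ["osd.1", "osd.2"],
    "host2": ["osd.3"],
}

def place_replicas(hosts, size, seed=0):
    """Simplified stand-in for 'chooseleaf_firstn ... type host':
    pick `size` distinct hosts first, then one OSD inside each,
    so at most one replica per host."""
    rng = random.Random(seed)
    chosen_hosts = rng.sample(sorted(hosts), k=size)
    return [rng.choice(hosts[h]) for h in chosen_hosts]

acting_set = place_replicas(hosts, size=3)
# Every replica sits on a different host: the situation where host1
# carries two copies while host2 carries none cannot occur.
```

Note that if `size` exceeded the number of host buckets (e.g. size 3 with only 2 hosts, as in the quoted pod case below), the selection would run out of distinct hosts and PGs would stay undersized.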

Konstantin Shalygin writes:

> On 04/27/2018 04:37 PM, Igor Gajsin wrote:
>> pool 7 'rbd' replicated size 3 min_size 2 crush_rule 0
>
>
> Your pool has the proper size setting: it is 3. But your CRUSH tree has
> only 2 buckets for this rule (i.e. your pods).
> To make this rule work you need a minimum of 3 'pod' buckets.
>
>
>
>
> k


--
With best regards,
Igor Gajsin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


