Re: How to proceed to change a crush rule and remap pg's?

I don't think that there's a feasible way to do this in a controlled manner. I would just change it and trust Ceph's remapping mechanism to work properly.
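
For reference, a rough sketch of what "just change it" could look like, with the file names as placeholders (adapt to your environment):

    # dump and decompile the current crush map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt: in replicated_rule, change
    #     step chooseleaf firstn 0 type osd
    # to
    #     step chooseleaf firstn 0 type host

    # recompile and inject the new map; Ceph starts remapping right away
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

The rebalance kicks off as soon as the new map is injected, but it is throttled by the normal backfill settings, so clients keep being served while the data moves.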

You could use crushtool to calculate what the new mapping would be and then do something crazy with upmaps (move the PGs to their new locations manually, one by one, and then remove all the upmaps and change the rule)... but that's quite annoying to do and probably doesn't really help.
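
If you want to at least preview the resulting mapping, crushtool can compute it offline; something along these lines (rule id 0 and three replicas taken from your dump, otherwise untested against your map):

    # sample mappings under the old and the edited map
    crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings > before.txt
    crushtool -i crushmap.new --test --rule 0 --num-rep 3 --show-mappings > after.txt
    diff before.txt after.txt

The manual upmap variant would then use "ceph osd pg-upmap-items <pgid> <from-osd> <to-osd> ..." to pin individual PGs to their new OSDs and "ceph osd rm-pg-upmap-items <pgid>" to drop the pins again once the rule has been changed, but as said, that's a lot of fiddling for little gain.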

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Tue, Nov 19, 2019 at 11:11 AM Maarten van Ingen <maarten.vaningen@xxxxxxxxxxx> wrote:
Hi,

I have a small but impactful error in my crush rules.
For unknown reasons the rules use osd rather than host as the failure domain, so some nodes hold all three copies of a PG instead of the copies being spread over three different nodes.
We noticed this when rebooting a node and a pg became stale.

My crush rule:
    {
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -2,
                "item_name": "default~hdd"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "osd"
            },
            {
                "op": "emit"
            }
        ]
    },


The type should of course be host, and I want to alter this and move the pg's so that everything is placed as it should be.
How can I best proceed in correcting this issue? I would like to throttle the remapping of the data so that Ceph itself doesn't become unavailable while the data is being redistributed.
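
I assume the throttling side comes down to the usual backfill/recovery settings, something like the following, but I have not tried this yet:

    # limit concurrent backfill/recovery per OSD during the move (values are just a guess)
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
    # optionally hold off rebalancing entirely until a quiet moment
    ceph osd set norebalance
    # ... and re-enable it when ready
    ceph osd unset norebalance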

We are running Mimic (13.2.6); this environment was installed fresh on Mimic using ceph-ansible.

Current ceph -s output:

  cluster:
    id:     <fsid>
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum mon01,mon02,mon03
    mgr: mon01(active), standbys: mon02, mon03
    mds: cephfs-2/2/2 up  {0=mon03=up:active,1=mon01=up:active}, 1 up:standby
    osd: 502 osds: 502 up, 502 in

  data:
    pools:   18 pools, 8192 pgs
    objects: 28.74 M objects, 100 TiB
    usage:   331 TiB used, 2.3 PiB / 2.6 PiB avail
    pgs:     8192 active+clean


Cheers,

Maarten van Ingen
| Systems Expert | Distributed Data Processing | SURFsara | Science Park 140 | 1098 XG Amsterdam |
| T +31 (0) 20 800 1300 | maarten.vaningen@xxxxxxxxxxx | https://surfsara.nl |

We are ISO 27001 certified and meet the high requirements for information security.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
