Re: 2-Layer CRUSH Map Rule?

Hi Matthew,

You just have to take two steps when writing your CRUSH rule: first
choose 3 different hosts, then choose 2 OSDs from each of those hosts.

# dump the compiled crush map and decompile it to editable text
ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt
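
If you are not sure which rule IDs are already taken (the rule below
uses id 3), you can list the existing rules first. A quick sketch,
assuming the rule_id/rule_name field names in the JSON dump:

# list existing rule IDs and names; pick an unused ID for the new rule
ceph osd crush rule dump | grep -E '"rule_(id|name)"'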

#edit it / make new rule

rule custom-ec-ruleset {
        id 3
        type erasure
        min_size 4
        max_size 6
        # start from your CRUSH root (replace your-root with its name)
        step take your-root
        # first pick 3 distinct hosts
        step choose indep 3 type host
        # then pick 2 OSDs under each of those hosts
        step chooseleaf indep 2 type osd
        step emit
}

# compile the edited map back into binary form
crushtool -c /tmp/crush.txt -o /tmp/crush.new

You can use crushtool to test the mappings and make sure they work as
you expect.

crushtool -i /tmp/crush.new --test --show-mappings --rule 3 --num-rep 6
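
If you only want to see placements that fail to map all 6 shards,
--show-bad-mappings prints just those (no output means every mapping
succeeded):

crushtool -i /tmp/crush.new --test --show-bad-mappings --rule 3 --num-rep 6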

You can then compare the OSD IDs in the --show-mappings output and make
sure the placement is exactly what you are looking for.
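
To check which host a mapped OSD actually lives on, ceph osd tree shows
the whole layout, or you can look up a single OSD (osd.4 is just an
example ID):

ceph osd tree
ceph osd find 4 | grep -w host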

You can set the crushmap with

ceph osd setcrushmap -i /tmp/crush.new

Note: If you are overwriting your current rule, your data will need to
rebalance as soon as you set the crushmap; close to 100% of your objects
will move. If you create a new rule instead, you can switch your pool to
the new rule whenever you are ready.
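
Switching a pool over is done by setting its crush_rule to the new
rule's name, for example ("your-ec-pool" is a placeholder; use your
pool's actual name):

# point the pool at the new rule by name
ceph osd pool set your-ec-pool crush_rule custom-ec-ruleset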

On Sun, Sep 25, 2022 at 12:49 AM duluxoz <duluxoz@xxxxxxxxx> wrote:

> Hi Everybody (Hi Dr. Nick),
>
> TL/DR: Is it possible to have a "2-Layer" Crush Map?
>
> I think it is (although I'm not sure about how to set it up).
>
> My issue is that we're using 4-2 Erasure coding on our OSDs, with 7 OSDs
> per OSD-Node (yes, the Cluster is handling things AOK - we're running at
> about 65-70% utilisation of CPU, RAM, etc, so no problem there).
>
> However, we only have 3 OSD-Nodes, and I'd like to ensure that each Node
> has 2 of each pool's OSDs so that if we lose a Node the other 2 can
> take up the slack. I know(?) that with 4-2 EC we can lose 2 out of the
> 6 OSDs, but I'm worried that if we lose a Node it'll take more than 2
> OSDs with it, rendering us "stuffed" (stuffed: a technical term which is
> used as a substitute for a four-letter word rhyming with "truck") 😁
>
> Anyone have any pointers?
>
> Cheers
>
> Dulux-Oz
>
>


-- 
Tyler Brekke
Senior Engineer I
tbrekke@xxxxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



