Re: usable size for replicated pool with custom rule in pacific dashboard

You are probably right! But this "verification" seems "stupid"!

I created an additional room (with no OSD in it), and then the dashboard doesn't complain anymore!

Indeed, the rule does what we want: "step choose firstn 0 type room" selects the different rooms (2 in our case); for the first room it puts 2 copies on different hosts ("step chooseleaf firstn 2 type host"), then it goes to the remaining room and puts the third copy there (and possibly a fourth if we choose replica 4).
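
For instance (the host names here are just examples), with room1 = {host1, host2} and room2 = {host3, host4}, a replica-3 placement would look like:

room1: host1, host2   <- "step chooseleaf firstn 2 type host" picks 2 hosts here
room2: host3          <- the remaining copy comes from the other room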

Enforcing the first step (step choose firstn 0 type room) to offer as many choices (rooms) as there are replicas makes the second step rather useless! That's why this verification appears somewhat "stupid" to me... The check should rather be that the number of replicas is not greater than the number of rooms × the number of leaves selected in the second step (2 in my case)... but maybe I missed something!
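
In numbers for our cluster (assuming my reading of the rule is right):

current check:  usable_size = number of rooms = 2                 -> replica 3 rejected
proposed check: usable_size = rooms x leaves per room = 2 x 2 = 4 -> replica 3 (and 4) accepted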

F.


On 09/09/2021 at 13:23, Ernesto Puerta wrote:
Hi Francois,

I'm not an expert on CRUSH rule internals, but I checked the code: it takes the failure domain from the first choose/chooseleaf step, which here is "room". Since there are just 2 rooms vs. 3 replicas, it doesn't let you create a pool with a rule that might not work optimally (keep in mind that the Dashboard tries to perform some extra validations compared to the Ceph CLI).
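
Roughly speaking (this is my paraphrase of the logic, not the exact code), the computed limit seems to be:

usable_size = number of buckets of the failure-domain type ("room") = 2

so any replicated size above 2 is rejected, which matches the "A value from 3 to 2 is usable" message you saw.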

Kind Regards,
Ernesto


On Thu, Sep 9, 2021 at 12:29 PM Francois Legrand <fleg@xxxxxxxxxxxxxx> wrote:

    Hi all,

    I have a test Ceph cluster with 4 OSD servers, each containing 3 OSDs.

    The CRUSH map uses 2 rooms with 2 servers in each room.

    We use replica 3 for pools.

    I have the following custom CRUSH rule to ensure that I have at least
    one copy of each piece of data in each room.

    rule replicated3over2rooms {
         id 1
         type replicated
         min_size 3
         max_size 4
         step take default
         step choose firstn 0 type room
         step chooseleaf firstn 2 type host
         step emit
    }
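
    (For what it's worth, the rule can also be tested outside the dashboard
    by extracting the CRUSH map and running it through crushtool, e.g.:

        ceph osd getcrushmap -o crushmap.bin
        crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings

    which prints the OSDs chosen for a set of sample inputs; crushmap.bin is
    just an example file name.)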

    Everything was working well on Nautilus/CentOS 7 (I could create pools
    using the dashboard and my custom rule).

    I upgraded to Pacific/Ubuntu 20.04 in containers with cephadm.

    Now, I cannot create a new pool with replicated3over2rooms using the
    dashboard!

    If I choose Pool type = replicated, Replicated size = 3, and Crush
    ruleset = replicated3over2rooms, the dashboard says:

    Minimum: 3
    Maximum: 2
    The size specified is out of range. A value from 3 to 2 is usable.

    And inspecting the replicated3over2rooms ruleset in the dashboard shows
    that the parameters are

    max_size 4
    min_size 3
    rule_id 1
    usable_size 2

    Where does that usable_size come from? How can I correct it?

    If I run the command line

    ceph osd pool create test 16 replicated replicated3over2rooms 3

    it works!!
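
    To double-check the actual placement, something like

        ceph osd map test some-object

    shows the PG and the acting set for that object (the object name is just
    an example).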

    Thanks.

    F.



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



