Re: usable size for replicated pool with custom rule in pacific dashboard

Hi Francois,

I'm not an expert on CRUSH rule internals, but I checked the code: the
Dashboard assumes the failure domain is the bucket type of the rule's first
choose/chooseleaf step, which here is "room". Since there are only 2 rooms
for 3 replicas, it refuses to create a pool whose rule might not place the
data optimally (keep in mind that the Dashboard performs some extra
validations compared to the Ceph CLI).
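To make the cap more concrete, here is a rough sketch (not the actual Ceph Dashboard code, and the topology below is made up to match Francois's cluster) of how a "usable_size" of 2 can arise when the bucket type of the rule's first choose step is counted as the failure domain:

```python
# Hypothetical, simplified CRUSH topology: 2 rooms, 2 hosts each.
crush_buckets = [
    {"name": "room1", "type": "room"},
    {"name": "room2", "type": "room"},
    {"name": "host1", "type": "host"},
    {"name": "host2", "type": "host"},
    {"name": "host3", "type": "host"},
    {"name": "host4", "type": "host"},
]

def usable_size(buckets, failure_domain):
    """Count the buckets of the failure-domain type; a replicated
    pool's size is then validated against this count."""
    return sum(1 for b in buckets if b["type"] == failure_domain)

# The rule's first step is "choose firstn 0 type room", so the
# failure domain is treated as "room" -> only 2 candidates.
print(usable_size(crush_buckets, "room"))   # 2
# If "host" were the failure domain instead, 4 replicas would fit.
print(usable_size(crush_buckets, "host"))   # 4
```

In other words, the validation never looks past the first step to see that the nested chooseleaf over hosts actually yields up to 4 placement candidates.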

Kind Regards,
Ernesto


On Thu, Sep 9, 2021 at 12:29 PM Francois Legrand <fleg@xxxxxxxxxxxxxx>
wrote:

> Hi all,
>
> I have a test ceph cluster with 4 osd servers containing each 3 osds.
>
> The crushmap uses 2 rooms with 2 servers in each room.
>
> We use replica 3 for pools.
>
> I have the following custom crush rule to ensure that I have at least one
> copy of the data in each room.
>
> rule replicated3over2rooms {
>      id 1
>      type replicated
>      min_size 3
>      max_size 4
>      step take default
>      step choose firstn 0 type room
>      step chooseleaf firstn 2 type host
>      step emit
> }
>
> Everything was working well in nautilus/centos7 (I could create pools
> using the dashboard and my custom rule).
>
> I upgraded to pacific/ubuntu 20.04 in containers with cephadm.
>
> Now, I cannot create a new pool with replicated3over2rooms using the
> dashboard!
>
> If I choose Pool type = replicated, Replicated size = 3, Crush ruleset =
> replicated3over2rooms
>
> The dashboard says:
>
> Minimum: 3
> Maximum: 2
> The size specified is out of range. A value from 3 to 2 is usable.
>
> And inspecting the replicated3over2rooms ruleset in the dashboard shows
> that the parameters are
>
> max_size 4
> min_size 3
> rule_id 1
> usable_size 2
>
> Where does that usable_size come from? How can I correct it?
>
> If I run the command line
>
> ceph osd pool create test 16 replicated replicated3over2rooms 3
>
> it works!
>
> Thanks.
>
> F.
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


