usable size for replicated pool with custom rule in pacific dashboard

Hi all,

I have a test Ceph cluster with 4 OSD servers, each containing 3 OSDs.

The crushmap uses 2 rooms with 2 servers in each room.
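
To give an idea of the layout, the hierarchy looks roughly like this (the bucket names, ids and weights below are just placeholders, the real ones differ):

room room1 {
    id -5
    alg straw2
    hash 0  # rjenkins1
    item srv1 weight 3.000
    item srv2 weight 3.000
}
# room2 is identical, containing srv3 and srv4
root default {
    id -1
    alg straw2
    hash 0  # rjenkins1
    item room1 weight 6.000
    item room2 weight 6.000
}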

We use replica 3 for pools.

I have the following custom crush rule to ensure that at least one copy of each object is stored in each room.

rule replicated3over2rooms {
    id 1
    type replicated
    min_size 3
    max_size 4
    step take default                     # start from the default root
    step choose firstn 0 type room        # as many rooms as the pool size requires (both here)
    step chooseleaf firstn 2 type host    # up to 2 hosts per room, one osd per host
    step emit
}
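
In case it matters for reproducing this, such a rule can be injected by decompiling, editing and recompiling the crush map, roughly like this (file names are arbitrary):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# add the rule above to crushmap.txt, then:
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new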

Everything was working well on Nautilus/CentOS 7 (I could create pools using the dashboard and my custom rule).

I upgraded to Pacific on Ubuntu 20.04, running in containers with cephadm.

Now I cannot create a new pool with replicated3over2rooms using the dashboard!

If I choose Pool type = replicated, Replicated size = 3 and Crush ruleset = replicated3over2rooms, the dashboard says:

Minimum: 3
Maximum: 2
The size specified is out of range. A value from 3 to 2 is usable.

And inspecting the replicated3over2rooms rule in the dashboard shows that its parameters are:

max_size 4
min_size 3
rule_id 1
usable_size 2

Where does that usable_size come from? How can I correct it?
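
For comparison, the rule can also be dumped on the command line with

ceph osd crush rule dump replicated3over2rooms

and, as far as I can see, that output only contains rule_id, min_size, max_size and the steps, not any usable_size, so the value seems to be computed by the dashboard itself.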

If I run the following on the command line:

ceph osd pool create test 16 replicated replicated3over2rooms 3

it works!
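
The resulting pool can then be checked with, e.g.:

ceph osd pool get test crush_rule
ceph osd pool get test size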

Thanks.

F.



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



