incorrect pool size, wrong ruleset?

Hi,

I have 2 hosts with 8 x 2TB drives in each.
I want the data replicated across both hosts, with 2 replicas on the OSDs of each host, so 4 replicas in total. That way, even if I lose one host, I still have 2 replicas.

Currently I have this ruleset:

rule repl {
        ruleset 5
        type replicated
        min_size 1
        max_size 10
        # pick OSDs on host asterix; firstn -2 means "pool size minus 2", so 2 OSDs for a size-4 pool
        step take asterix
        step choose firstn -2 type osd
        step emit
        # then pick 2 more OSDs on host obelix
        step take obelix
        step choose firstn 2 type osd
        step emit
}
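
For context, the rule gets attached to the pool with something like the following (the pool name "rbd" is just a placeholder here; size 4 matches the 4 replicas above):

        ceph osd pool set rbd crush_ruleset 5
        ceph osd pool set rbd size 4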

This works OK: I get the 4 replicas I want and the PGs are distributed perfectly, but when I run ceph df I see only half of the capacity I should have.
In total it's 32TB raw, 16TB per host. With 2 replicas on each host (4 in total) it should report around 8TB usable, right? Instead it reports only 4TB for the pool, which is 1/8 of the raw capacity.
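To spell out the numbers:

        2 hosts x 8 drives x 2TB = 32TB raw
        32TB raw / 4 replicas    = 8TB usable (what I expect)
        ceph df reports            4TB (i.e. 32TB / 8)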
Can anyone tell me what is wrong?

Thanks

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



