Mimic - EC and crush rules - clarification

Hi,

I am trying to create an EC pool on my SSD-based OSDs
and would appreciate it if someone could clarify / provide advice on the following:

- best K + M combination for 4 hosts with one OSD per host (rough sketch below)
  My understanding is that K+M should be < the number of OSDs, but using K=2, M=1 does not provide any redundancy
  (as soon as 1 OSD is down, you cannot write to the pool).
  Am I right?
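
For context, the sort of profile I have in mind for 4 hosts with one OSD each would be something like the following (the profile name "ec22ssd" is just a placeholder, and k=2, m=2 is only one candidate combination):

ceph osd erasure-code-profile set ec22ssd k=2 m=2 crush-failure-domain=host crush-device-class=ssd
ceph osd erasure-code-profile get ec22ssd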

- assigning the crush_rule as per the documentation does not seem to work.
If I provide all the crush rule details when I create the EC profile, the PGs are placed on the SSD OSDs AND a crush rule is automatically created.
Is that the right/new way of doing it?
EXAMPLE
ceph osd erasure-code-profile set erasureISA crush-failure-domain=osd k=3 m=2 crush-root=ssds plugin=isa technique=cauchy crush-device-class=ssd 
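
followed by creating a pool from that profile (the pool name "ectest" is just an example), after which the extra rule shows up in "ceph osd crush rule ls" by itself:

ceph osd pool create ectest 64 64 erasure erasureISA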

 
 [root@osd01 ~]#  ceph osd crush rule ls
replicated_rule
erasure-code
ssdrule
[root@osd01 ~]# ceph osd crush rule dump ssdrule
{
    "rule_id": 2,
    "rule_name": "ssdrule",
    "ruleset": 2,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -4,
            "item_name": "ssds"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}

[root@osd01 ~]# ceph osd pool set test crush_rule 2
Error ENOENT: crush rule 2 does not exist
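
If it matters, my reading of the Mimic docs is that the crush_rule pool option expects the rule name rather than the numeric id, i.e. something like:

ceph osd pool set test crush_rule ssdrule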

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
