Re: Mimic - EC and crush rules - clarification


 



Yes, when you create an EC pool from an EC profile, a CRUSH rule specific to that profile is created automatically.  You are also correct that 2+1 doesn't really have any resiliency built in.  2+2 would allow 1 node to go down while still keeping your data accessible.  It uses a 2x raw-to-usable ratio as opposed to the 1.5x of 2+1, but it gives you resiliency.  The 3+2 in your example command is not possible with your setup, since k+m=5 is more than your 4 OSDs.  May I ask why you want EC on such a small OSD count?  I'm guessing it's to use less of your SSD capacity, but with such a small cluster I would just suggest going with replication.  Once you have a larger node/OSD count you can start evaluating whether EC is right for your use case, but if this is production data... I wouldn't risk it.
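If you do want to try 2+2 on your 4 hosts with one SSD OSD each, it would look roughly like this (just a sketch; ec22ssd and ecpool are placeholder names, and pg_num needs to be sized for your cluster):

ceph osd erasure-code-profile set ec22ssd k=2 m=2 crush-failure-domain=host crush-device-class=ssd
ceph osd pool create ecpool 64 64 erasure ec22ssd
ceph osd crush rule ls

Creating the pool from the profile is the step that generates the matching CRUSH rule, which should then show up in the rule listing.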

When setting the crush rule on the pool, the command wants the rule's name, ssdrule, not its rule_id of 2.
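With the names from your output, that would be roughly:

ceph osd pool set test crush_rule ssdrule
ceph osd pool get test crush_rule

The second command just confirms which rule the pool is now using.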

On Thu, Nov 1, 2018 at 1:34 PM Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
Hi,

I am trying to create an EC pool on my SSD-based OSDs
and would appreciate it if someone could clarify / provide advice about the following

- best K+M combination for 4 hosts, one OSD per host
  My understanding is that K+M must be <= the number of OSDs, but using K=2, M=1 does not provide any redundancy
  (as soon as 1 OSD is down, you cannot write to the pool).
  Am I right?

- assigning crush_rule as per the documentation does not seem to work
If I provide all the crush rule details when I create the EC profile, the PGs are placed on SSD OSDs AND a crush rule is automatically created.
Is that the right/new way of doing it?
EXAMPLE
ceph osd erasure-code-profile set erasureISA crush-failure-domain=osd k=3 m=2 crush-root=ssds plugin=isa technique=cauchy crush-device-class=ssd 

 
 [root@osd01 ~]#  ceph osd crush rule ls
replicated_rule
erasure-code
ssdrule
[root@osd01 ~]# ceph osd crush rule dump ssdrule
{
    "rule_id": 2,
    "rule_name": "ssdrule",
    "ruleset": 2,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -4,
            "item_name": "ssds"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}

[root@osd01 ~]# ceph osd pool set test crush_rule 2
Error ENOENT: crush rule 2 does not exist

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
