Re: Mimic - EC and crush rules - clarification

The difference between 2+2 and 2x replication isn't in the amount of space used or saved, but in the number of OSDs you can safely lose without any data loss or outages.  2x replication is generally considered very unsafe for data integrity, while 2+2 is as resilient as 3x replication and only uses as much space as 2x replication.
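
For illustration, here is a minimal sketch of how the two layouts compare
on the command line (the profile/pool names ec22, ecpool and replpool and
the PG counts are made up for this example, not taken from your cluster):

    # EC 2+2: k=2 data chunks + m=2 coding chunks per object.
    # Raw usage = (k+m)/k = 2.0x, and any 2 OSDs (or hosts, with
    # crush-failure-domain=host) can fail without losing data.
    ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 64 64 erasure ec22

    # 2x replication: the same 2.0x raw usage, but only 1 copy may be lost.
    ceph osd pool create replpool 64 64 replicated
    ceph osd pool set replpool size 2

    # 3x replication: survives 2 failures like EC 2+2, but at 3.0x raw usage.
    ceph osd pool set replpool size 3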

On Thu, Nov 1, 2018 at 11:25 PM Wladimir Mutel <mwg@xxxxxxxxx> wrote:
David Turner wrote:
> Yes, when creating an EC profile, it automatically creates a CRUSH rule
> specific for that EC profile.  You are also correct that 2+1 doesn't
> really have any resiliency built in.  2+2 would allow 1 node to go down
> while still having your data accessible.  It will use 2x data to raw as

        Isn't EC 2+2 the same as 2x replication (i.e. RAID1)?
        Isn't the benefit and intention of EC to allow equivalent
        replication factors between >1 and <2 to be chosen?
        That's why it is recommended to have m<k in the EC parameters:
        when you have m==k, it is equivalent to 2x replication, with
        m==2k to 3x replication, and so on.
        Correspondingly, with m==1 you have reliability equivalent to
        RAID5, with m==2 to RAID6, and you only start to get more
        "interesting" reliability factors when you can allow m>2 and
        k>m. Overall, your reliability in Ceph is measured by the
        cluster rebuild/performance-degradation time after up to m OSD
        failures, provided that no more than m OSDs (or larger failure
        domains) fail at once.
        Of course, EC is beneficial only when you have enough failure
        domains (i.e. hosts). My criterion is that you should have more
        hosts than individual OSDs within a single host, i.e. at least
        8 (and preferably more than 8) hosts when you have 8 OSDs
        per host.
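
        As a quick sketch of that arithmetic (the profile names below
        are made up; the default failure domain of an EC profile is
        host): a k+m profile stores (k+m)/k times the logical data and
        survives the loss of any m chunks, so with a host failure
        domain you need at least k+m hosts.

            # overhead = (k+m)/k, failures tolerated = m,
            # hosts needed (crush-failure-domain=host) = k+m
            ceph osd erasure-code-profile set ec21 k=2 m=1   # 1.5x,  1 failure,  3 hosts
            ceph osd erasure-code-profile set ec42 k=4 m=2   # 1.5x,  2 failures, 6 hosts
            ceph osd erasure-code-profile set ec83 k=8 m=3   # ~1.4x, 3 failures, 11 hosts
            ceph osd erasure-code-profile ls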

> opposed to the 1.5x of 2+1, but it gives you resiliency.  The example in
> your command of 3+2 is not possible with your setup.  May I ask why you
> want EC on such a small OSD count?  I'm guessing to not use as much
> storage on your SSDs, but I would just suggest going with replica with
> such a small cluster.  If you have a larger node/OSD count, then you can
> start seeing if EC is right for your use case, but if this is production
> data... I wouldn't risk it.

> When setting the crush rule, it wants the rule's name, ssdrule, not 2.
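
        For example, a minimal sketch (the pool name "rbd" is only a
        placeholder; "ssdrule" is the rule name from this thread):

            # List the CRUSH rules to get the exact name, then assign by name.
            ceph osd crush rule ls
            ceph osd pool set rbd crush_rule ssdrule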



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
