Re: Erasure code profile

This can be changed to a failure domain of OSD, in which case your cluster could satisfy the criteria. The problem with an OSD failure domain is that all of your data could reside on a single host, so you could lose access to your data after restarting a single host.
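
For reference, the failure domain is chosen when the EC profile is created; something like the following should work (the profile name, pool name, and pg_num here are only placeholders):

    # EC profile that separates chunks per OSD instead of per host
    ceph osd erasure-code-profile set ec-k10-m4-osd k=10 m=4 crush-failure-domain=osd
    # pool created from that profile
    ceph osd pool create ecpool 200 200 erasure ec-k10-m4-osd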

On Mon, Oct 23, 2017 at 3:23 PM LOPEZ Jean-Charles <jelopez@xxxxxxxxxx> wrote:
Hi,

the default failure domain, if not specified on the CLI when you create your EC profile, is host. So with k=10, m=4 you need 14 OSDs spread across 14 different nodes by default, and you only have 8 nodes.
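
You can verify what a profile will do before creating the pool; for example (replace "ec-k10-m4" with whatever you named your profile):

    # list profiles and dump the one you created
    ceph osd erasure-code-profile ls
    ceph osd erasure-code-profile get ec-k10-m4
    # the crush-failure-domain line shows which bucket type (host by default)
    # each of the k+m chunks must land in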

Regards
JC

On 23 Oct 2017, at 21:13, Karun Josy <karunjosy1@xxxxxxxxx> wrote:

Thank you for the reply.

There are 8 OSD nodes with 23 OSDs in total. (However, they are not distributed equally across the nodes.)

So it satisfies that criteria, right?



Karun Josy

On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles <jelopez@xxxxxxxxxx> wrote:
Hi,

yes, you need at least as many OSDs as k+m. In your example you need a minimum of 14 OSDs for each PG to become active+clean.
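
As a quick sanity check (commands are only illustrative), you can compare k+m against what CRUSH can actually choose from:

    # k=10, m=4 means 14 chunks per PG, each on a separate failure domain
    ceph osd tree          # shows how many hosts and OSDs are available
    ceph health detail     # explains why PGs are not active+clean
    ceph pg dump_stuck inactive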

Regards
JC

On 23 Oct 2017, at 20:29, Karun Josy <karunjosy1@xxxxxxxxx> wrote:

Hi,

While creating a pool with erasure code profile k=10, m=4, I get PG status as
"200 creating+incomplete"

While creating a pool with profile k=5, m=3, it works fine.

The cluster has 8 OSD nodes with 23 disks in total.

Are there any requirements for using the first profile?

Karun 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
