Fwd: Erasure code Plugins

Hi All,

Any help in this regard will be appreciated.

Thanks,
Daleep Singh Bais


-------- Forwarded Message --------
Subject: Erasure code Plugins
Date: Fri, 19 Feb 2016 12:13:36 +0530
From: Daleep Singh Bais <daleepbais@xxxxxxxxx>
To: ceph-users <ceph-users@xxxxxxxx>


Hi All,

I am experimenting with erasure-code profiles and would like to understand them better. I created an LRC profile based on http://docs.ceph.com/docs/master/rados/operations/erasure-code-lrc/

The LRC profile I created is:

ceph osd erasure-code-profile get lrctest1
k=2
l=2
m=2
plugin=lrc
ruleset-failure-domain=host
ruleset-locality=host
ruleset-root=default
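
For reference, I created the profile with a command along these lines, taken from the LRC doc page above (the values match the dump; I believe ruleset-root=default was filled in automatically):

ceph osd erasure-code-profile set lrctest1 \
    plugin=lrc \
    k=2 m=2 l=2 \
    ruleset-failure-domain=host \
    ruleset-locality=host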

However, when I create a pool based on this profile, I see a health warning in ceph -w (128 pgs stuck inactive and 128 pgs stuck unclean). This is the first pool in the cluster.
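
The pool itself was created from the profile roughly like this (the pool name here is illustrative; 128 is the pg count shown in the pgmap below):

ceph osd pool create lrcpool1 128 128 erasure lrctest1   # pool name is a placeholder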

As I understand it, m is the number of parity chunks, and l causes additional local parity chunks to be created over the k data chunks. Please correct me if I am wrong.
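
If I read the LRC page correctly, this layout adds one local parity chunk for every l chunks, so per object it would store

    k + m + (k + m) / l  =  2 + 2 + (2 + 2) / 2  =  6 chunks,

and with ruleset-failure-domain=host each of those 6 chunks must land on a different host. I am checking how my 6 OSDs are spread across hosts with:

ceph osd tree   # shows the host buckets the OSDs belong to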

Below is the output of ceph -w:

     health HEALTH_WARN
            128 pgs stuck inactive
            128 pgs stuck unclean
     monmap e7: 1 mons at {node1=192.168.1.111:6789/0}
            election epoch 101, quorum 0 node1
     osdmap e928: 6 osds: 6 up, 6 in
            flags sortbitwise
      pgmap v54114: 128 pgs, 1 pools, 0 bytes data, 0 objects
            10182 MB used, 5567 GB / 5589 GB avail
                 128 creating
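
In case more detail helps, the stuck PGs can be inspected with the commands below (<pgid> is a placeholder for one of the ids from the dump):

ceph pg dump_stuck inactive   # list the stuck, inactive PGs
ceph pg <pgid> query          # detailed state for one stuck PG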


Any help or guidance in this regard is highly appreciated.

Thanks,

Daleep Singh Bais


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
