Re: Some doubts on implementing erasure codes.

On Wed, Aug 8, 2018 at 3:43 PM, Aaditya M Nair
<aadityanair6494@xxxxxxxxx> wrote:
> Hey,
>
> I am trying to implement a new erasure code in Ceph. I am running into a few
> problems, and I am hoping you may be able to help me.
>
> 1. For my erasure scheme, I want each OSD to send me not the entire chunk
> but some (pre-defined) function of it. Is there already some infrastructure
> set up to do that? If not, where exactly can I implement it?

Hmm, I thought this was easy to set up but now I'm not so sure. Can
you explain your strategy a bit more and describe what kind of API
you're looking for?
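For context, new codes normally plug in via the erasure code plugin
interface, and that interface only ever moves whole chunks between OSDs.
Below is a rough sketch of the relevant hooks, reconstructed from memory
of src/erasure-code/ErasureCodeInterface.h; treat names and signatures as
approximate and check the header in your branch:

#include <map>
#include <set>
#include "include/buffer.h"   // ceph::bufferlist

// Sketch only: the real interface has more methods
// (init, get_chunk_count, create_rule, ...).
class ErasureCodeInterface {
public:
  virtual ~ErasureCodeInterface() {}

  // Given the chunks we want and the chunks that survive, pick the
  // minimum set of chunks the OSDs must return. Note that the OSDs
  // return those chunks *whole*; there is currently no hook for
  // "send me f(chunk) instead".
  virtual int minimum_to_decode(const std::set<int> &want_to_read,
                                const std::set<int> &available_chunks,
                                std::set<int> *minimum) = 0;

  // Encode an object into k data + m coding chunks.
  virtual int encode(const std::set<int> &want_to_encode,
                     const ceph::bufferlist &in,
                     std::map<int, ceph::bufferlist> *encoded) = 0;

  // Reconstruct the wanted chunks from whole surviving chunks.
  virtual int decode(const std::set<int> &want_to_read,
                     const std::map<int, ceph::bufferlist> &chunks,
                     std::map<int, ceph::bufferlist> *decoded) = 0;
};

So minimum_to_decode only lets a plugin choose *which* whole chunks to
fetch; having each OSD return a function of its chunk would also require
changes on the OSD read path, not just a new plugin.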


> 2. In my current vstart setup, I find that marking an OSD OUT creates a
> newly remapped PG. In the example below, I have a total of 7 OSDs using
> the reed_sol_van erasure scheme with k=5 and m=2:
>
> $ ceph pg map 8.0 # ID of my erasure-coded pool
> osdmap e60 pg 8.0 (8.0) -> up [3,5,1,4,2,6,0] acting [3,5,1,4,2,6,0]
>
> $ ceph osd out 0
> marked out osd.0.
>
> # after a few seconds
> $ ceph pg map 8.0
> osdmap e63 pg 8.0 (8.0) -> up [3,5,1,4,2147483647,6,2] acting [3,5,1,4,2,6,2]
>
>
> Does this create a new PG from somewhere? Even the cluster health is
> HEALTH_OK. How can I make sure that the cluster runs in a degraded state,
> so that when I deploy my erasure code I can check that it is indeed
> working correctly?

2147483647 is 0x7fffffff, the largest signed 32-bit integer, which Ceph
prints as the placeholder for an unmapped slot (CRUSH_ITEM_NONE): with
only 7 OSDs and one of them out, CRUSH is failing to find 7 distinct
OSDs to host the shards, so that slot in the "up" set is left unmapped.
However, because the shard for that slot already exists on one of the
surviving OSDs, the PG keeps it in the "acting" set on its previous
host (osd.2), which is why you still see HEALTH_OK.
-Greg
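
If you want the cluster to actually run degraded for your tests, one way
(a sketch assuming a vstart cluster; adapt the kill step to however you
manage your daemons) is to stop an OSD *without* letting it get marked
out, so CRUSH never remaps:

$ ceph osd set noout    # don't auto-mark stopped OSDs out
$ kill <pid of the osd.0 ceph-osd process>
$ ceph -s               # should report HEALTH_WARN with undersized/degraded PGs
$ ceph pg map 8.0       # the acting set is now really missing a shard

With noout set, the PG stays undersized instead of remapping, so reads of
the missing shard have to decode from the surviving chunks, which is
exactly the path you want your code to exercise.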

>
>
> Thank you for your help.
>
> Aaditya M Nair
> IIIT - Hyderabad
>


