Some questions on implementing erasure codes.

Hey,

I am trying to implement a new erasure code in Ceph. I am running into a few problems and I am hoping you may be able to help me.

1. For my erasure scheme, I want each OSD to send me not the entire chunk but some (pre-defined) function of it. Is there already some infrastructure set up to do that? If not, where exactly can I implement it?
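
To make concrete what I mean by a "(pre-defined) function", here is a toy C++ sketch. It is purely conceptual (it is not Ceph code and not the erasure-code plugin interface), but it shows the kind of transformation I want each OSD to apply: fold its chunk into a much smaller repair symbol (here just the XOR of d equal sub-chunks, the way regenerating codes cut repair bandwidth) and ship that instead of the full chunk.

// Toy illustration only; NOT Ceph code or its plugin API.
// A helper OSD folds its chunk into chunk_size/d bytes (the XOR of its
// d sub-chunks) and sends that instead of the whole chunk.
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

std::vector<uint8_t> repair_symbol(const std::vector<uint8_t>& chunk, size_t d) {
    assert(d > 0 && chunk.size() % d == 0);
    const size_t sub = chunk.size() / d;        // size of one sub-chunk
    std::vector<uint8_t> out(sub, 0);
    for (size_t i = 0; i < d; ++i)              // XOR the d sub-chunks together
        for (size_t j = 0; j < sub; ++j)
            out[j] ^= chunk[i * sub + j];
    return out;                                 // only 1/d of the chunk goes on the wire
}

int main() {
    std::vector<uint8_t> chunk(4096, 0xAB);     // pretend this 4 KiB chunk lives on an OSD
    std::vector<uint8_t> sym = repair_symbol(chunk, 4);
    std::printf("chunk: %zu bytes, repair symbol: %zu bytes\n",
                chunk.size(), sym.size());
    return 0;
}

The only point of the sketch is the bandwidth saving: with d sub-chunks, each helper ships 1/d of a chunk, so I am looking for the place in the read/recovery path where such a per-OSD transformation could hook in.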

2. In my current vstart setup, I find that marking an OSD as OUT creates a newly remapped PG. In the example below, I have a total of 7 OSDs and an erasure-coded pool using reed_sol_van with k=5 and m=2:

$ ceph pg map 8.0 # pool 8 is my erasure-coded pool
osdmap e60 pg 8.0 (8.0) -> up [3,5,1,4,2,6,0] acting [3,5,1,4,2,6,0]

$ ceph osd out 0
marked out osd.0.

# after a few seconds
$ ceph pg map 8.0
osdmap e63 pg 8.0 (8.0) -> up [3,5,1,4,2147483647,6,2] acting [3,5,1,4,2,6,2]


Does this pull in a new PG from somewhere? The cluster health even stays HEALTH_OK. How can I make sure the cluster runs in a degraded state, so that when I deploy my erasure code I can check that it is indeed working correctly?
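
For reference, here is what I was planning to try in order to force a degraded state (assuming the standard ceph CLI; I have not verified this on my vstart cluster yet):

$ ceph osd set noout        # keep the cluster from marking a down OSD out and remapping
# then stop the osd.0 daemon (kill the process that vstart.sh started)
$ ceph health detail        # hoping for HEALTH_WARN with degraded PGs
$ ceph pg map 8.0           # expecting 8.0 to be reported degraded rather than remapped

Is that the right approach, or is there a better knob for testing recovery with a new erasure code plugin?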


Thank you for your help.

Aaditya M Nair
IIIT - Hyderabad
