Need erasure coding, pg and block size explanation

When we use a replicated pool of size 3, for example, each piece of data is written as a 4 MB block to one PG, which is distributed across 3 hosts (by default). The OSD holding the primary copy of the PG then copies the block to the OSDs holding the second and third replicas.
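To make sure I have the replicated path right, here is a minimal sketch of how I picture it in Python (a simplification on my part, not Ceph's actual code; the 4 MB block size and the OSD names are my assumptions):

BLOCK_SIZE = 4 * 1024 * 1024  # the 4 MB block size I refer to above (my assumption)

def write_replicated(data: bytes, acting_set: list[str]) -> None:
    # The client sends each block to the PG's primary OSD, which
    # forwards a copy to each of the replica OSDs.
    primary, *replicas = acting_set
    for offset in range(0, len(data), BLOCK_SIZE):
        print(f"client -> {primary}: write block @ {offset}")
        for osd in replicas:
            print(f"{primary} -> {osd}: replicate block @ {offset}")

# one PG's acting set: primary first, then the two replicas (names invented)
write_replicated(b"\0" * (8 * 1024 * 1024), ["osd.0", "osd.7", "osd.12"])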

With erasure coding, let's take a RAID-5-like scheme such as k=2, m=1. Does Ceph buffer the data until it reaches 8 MB, which it can then divide into two 4 MB data blocks plus a 4 MB parity block? Or does it just divide the data into two chunks, whatever the size? Will it then use PG1 on osd.A to store the first data chunk, PG1 on osd.X to store the second data chunk, and PG1 on osd.Z to store the parity?
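To make the question concrete, here is a small sketch of the chunking I imagine for k=2, m=1, where the parity chunk is just the byte-wise XOR of the two data chunks (the padding behaviour and the function names are my assumptions, not necessarily what Ceph's erasure-code plugins actually do):

import math

K = 2  # data chunks; the m=1 parity chunk is the XOR computed below

def xor_parity(chunks: list[bytes]) -> bytes:
    # byte-wise XOR of the given chunks; with m=1 this single parity
    # chunk lets us rebuild any one lost chunk
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def encode(data: bytes) -> list[bytes]:
    # split the data into K equal chunks (zero-padded at the end),
    # then append the parity chunk
    chunk_len = math.ceil(len(data) / K)
    padded = data.ljust(K * chunk_len, b"\0")
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len] for i in range(K)]
    return chunks + [xor_parity(chunks)]

d0, d1, p = encode(b"some object data written to the EC pool")
# if the OSD holding d0 dies, d0 can be rebuilt from d1 and the parity
assert xor_parity([d1, p]) == d0

Each of the three resulting chunks would then, as I understand it, land on a different OSD of the same PG (osd.A, osd.X and osd.Z in my example above).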

Thanks in advance for your explanation; I haven't found any clear explanation of how the data chunks and parity are handled.
