Ceph is a large-scale storage system; it is not going to bother splitting a file that is only 9 bytes in size. Run the same test with a 4MB file and see how it splits up the content of the file.
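Roughly like this, for example (assuming the same pool 'ecpool'; the object name 'NYAN4M' and the temp file path are just placeholders):

    # create a 4MB file of random data and store it in the EC pool
    dd if=/dev/urandom of=/tmp/test4m.bin bs=1M count=4
    rados -p ecpool put NYAN4M /tmp/test4m.bin

    # find the PG and acting OSDs for the new object
    ceph osd map ecpool NYAN4M

Then inspect the shard files on each acting OSD under .../current/<pgid>s<shard>_head/, the same way as in your hexdumps below, and you should see the content actually spread across the k=3 data chunks.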
On Tue, Jun 20, 2017, 6:48 AM Jonas Jaszkowic <jonasjaszkowic.work@xxxxxxxxx> wrote:
I am currently evaluating erasure coding in Ceph. I wanted to know where my data and coding chunks are located, so I followed the example at http://docs.ceph.com/docs/master/rados/operations/erasure-code/#creating-a-sample-erasure-coded-pool and set up an erasure coded pool with k=3 data chunks and m=2 coding chunks. I stored an object named 'NYAN' with content 'ABCDEFGHI' in the pool.

The output of ceph osd map ecpool NYAN is the following, which seems correct:

osdmap e97 pool 'ecpool' (6) object 'NYAN' -> pg 6.bf243b9 (6.39) -> up ([3,1,0,2,4], p3) acting ([3,1,0,2,4], p3)

But when I look at the chunks stored on the corresponding OSDs, I see three chunks containing the whole content of the original file (padded with zeros to a size of 4.0K) and two chunks containing nothing but zeros. I do not understand this behavior. According to the link above, "The NYAN object will be divided in three (K=3) and two additional chunks will be created (M=2).", but what I actually see is that the file is replicated three times in its entirety, and what appear to be the coding chunks (i.e. the ones holding parity information) are objects containing nothing but zeros. Am I doing something wrong here?

Any help is appreciated!

Attached is the output on each OSD node with the path to the chunk and its content as a hexdump:

osd.0
path:     /var/lib/ceph/osd/ceph-0/current/6.39s2_head/NYAN__head_0BF243B9__6_ffffffffffffffff_2
md5sum:   1666ba51af756693678da9efc443ef44
filesize: 4.0K
hexdump:
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000560

osd.1
path:     /var/lib/ceph/osd/ceph-1/current/6.39s1_head/NYAN__head_0BF243B9__6_ffffffffffffffff_1
md5sum:   1666ba51af756693678da9efc443ef44
filesize: 4.0K
hexdump:
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000560

osd.2
path:     /var/lib/ceph/osd/ceph-2/current/6.39s3_head/NYAN__head_0BF243B9__6_ffffffffffffffff_3
md5sum:   ff6a7f77674e23fd7e3a0c11d7b36ed4
filesize: 4.0K
hexdump:
00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  |ABCDEFGHI.......|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000560

osd.3
path:     /var/lib/ceph/osd/ceph-3/current/6.39s0_head/NYAN__head_0BF243B9__6_ffffffffffffffff_0
md5sum:   ff6a7f77674e23fd7e3a0c11d7b36ed4
filesize: 4.0K
hexdump:
00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  |ABCDEFGHI.......|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000560

osd.4
path:     /var/lib/ceph/osd/ceph-4/current/6.39s4_head/NYAN__head_0BF243B9__6_ffffffffffffffff_4
md5sum:   ff6a7f77674e23fd7e3a0c11d7b36ed4
filesize: 4.0K
hexdump:
00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  |ABCDEFGHI.......|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000560

The erasure code profile used:

jerasure-per-chunk-alignment=false
k=3
m=2
plugin=jerasure
ruleset-failure-domain=host
ruleset-root=default
technique=reed_sol_van
w=8
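For reference, the pool setup followed the documentation example more or less verbatim, roughly along these lines (the profile name 'myprofile' and the PG count of 12 are taken from the doc example and are only illustrative):

    ceph osd erasure-code-profile set myprofile k=3 m=2 ruleset-failure-domain=host
    ceph osd pool create ecpool 12 12 erasure myprofile
    echo ABCDEFGHI | rados --pool ecpool put NYAN -
    ceph osd map ecpool NYAN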
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com