Re: LRC ec pool behaviour

Hi Shylesh,

On 28/05/2015 21:25, shylesh kumar wrote:
> Hi,
> 
> I created a LRC ec pool with the configuration
> 
> # ceph osd erasure-code-profile get  mylrc
> directory=/usr/lib64/ceph/erasure-code
> k=4
> l=3
> m=2
> plugin=lrc
> ruleset-failure-domain=osd
> 
> One of the pg mapping looks like
> ----------------
> 11.4    579     0       0       0       0       11204014080     6       6       active+clean    2015-05-28 10:14:40.937902      6771'15221      6771:16093      [0,1,9,2,8,6,7,4]       0       [0,1,9,2,8,6,7,4]       0       6746'2718       2015-05-28 03:07:01.864631      0'0     2015-05-21 03:37:24.438653
> 
> 
> I am appending data to an object in the above pg with the following snippet using librados
> =======================
>   {
>           char hello[4096] = {0,};
>           (void) memcpy(hello, "hello world", 11);
>           librados::bufferlist bl;
>           bl.append(hello, 4096);
> 
>           int i = 5000;
>           while (i > 0) {
>                   ret = io_ctx.append("another2", bl, 4096);
>                   if (ret < 0) {
>                           std::cerr << "Couldn't append to object! error " << ret << std::endl;
>                   //        exit(EXIT_FAILURE);
>                   } else {
>                           std::cout << "Appended to object 'another2'" << std::endl;
>                   }
>                   i--;
>           }
>   }
> ======================
> 
> 
> I could see that stripe_0 is on osd.0. I have written nearly 7MB so far, and all of the data is going to osd.0, while stripe_1 on osd.1 is just a truncated file with no data. osd.9 has the global parity, etc.

That does not seem right. 
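
As a quick sanity check, it may also be worth confirming that the object itself
has the logical size you expect after the appends. A minimal sketch, reusing the
io_ctx and object name from your snippet:

  // Logical object size as seen by the client; with k=4 each OSD holding
  // a data chunk should end up with roughly a quarter of this.
  uint64_t size = 0;
  time_t mtime = 0;
  int r = io_ctx.stat("another2", &size, &mtime);
  if (r < 0)
    std::cerr << "stat failed: " << r << std::endl;
  else
    std::cout << "another2 is " << size << " bytes" << std::endl;

If the size matches what you wrote but the chunk files are all on osd.0, that
points at chunk placement rather than at the writes being lost.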

> I was wondering whether there is a chunk size limit in LRC, such that it will not write to the other stripes until it has filled one stripe up to the chunk size.

To check whether you have discovered a problem, I wrote a test case that looks similar to what you describe. You can run it with https://github.com/ceph/ceph/pull/4797 as follows:

  git clone -b wip-11665-erasure-code-lrc https://github.com/dachary/ceph ceph
  cd ceph
  ./run-make-check.sh

and re-run just that test afterwards with

  cd ceph/src
  test/osd/osd-scrub-repair.sh TEST_corrupt_and_repair_lrc

I would be happy to try to reproduce the problem you hit if you provide me with specific instructions to do so.
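
If it helps, a self-contained variant of your loop along the lines below is what I
would run (a minimal sketch: the pool name "mylrcpool" is a placeholder for whichever
pool you created from the profile above, and it assumes a reachable cluster with a
default ceph.conf):

  // repro_append.cc -- append 5000 x 4KB (~20MB) to one object.
  // Build with: g++ repro_append.cc -lrados -o repro_append
  #include <rados/librados.hpp>
  #include <cstring>
  #include <iostream>

  int main()
  {
    librados::Rados cluster;
    if (cluster.init("admin") < 0)                      // connect as client.admin
      return 1;
    if (cluster.conf_read_file(NULL) < 0)               // default ceph.conf lookup
      return 1;
    if (cluster.connect() < 0)
      return 1;

    librados::IoCtx io_ctx;
    if (cluster.ioctx_create("mylrcpool", io_ctx) < 0)  // placeholder pool name
      return 1;

    char hello[4096] = {0,};
    memcpy(hello, "hello world", 11);
    librados::bufferlist bl;
    bl.append(hello, sizeof(hello));

    for (int i = 0; i < 5000; i++) {
      int ret = io_ctx.append("another2", bl, bl.length());
      if (ret < 0) {
        std::cerr << "append failed: " << ret << std::endl;
        return 1;
      }
    }
    return 0;
  }

With something like that, plus the exact profile and pool creation commands you used,
I can try to reproduce what you are seeing on a local test cluster.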

Cheers

> 
> 
> 
> -- 
> Thanks 
> Shylesh
>  

-- 
Loïc Dachary, Artisan Logiciel Libre


