EC pools and radosgw

I am playing around with erasure-coded pools on 0.78-348 (firefly) and am attempting to enable EC on the .rgw.buckets pool for radosgw (fresh install).
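
For reference, the pool setup was roughly the following (commands retyped from memory, so the profile name and PG counts are just what I happened to use; the defaults are, I believe, k=2 m=1 on this build):

  # EC profile with the built-in defaults
  ceph osd erasure-code-profile set ecprofile
  ceph osd erasure-code-profile get ecprofile

  # create the bucket data pool as erasure-coded using that profile
  ceph osd pool create .rgw.buckets 128 128 erasure ecprofile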

If I use a plain EC profile (no settings changed), uploads of various sizes work fine, and EC appears to be working based on how much space is being used in the cluster. If I start changing the k or m values, multipart uploads start failing on the first chunk. I haven't seen issues with rados put or rados bench on EC pools, and I saw the same behavior on the official v0.78 release.
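
The non-default case looks something like this (the k/m values here are just one example of what I tried; since a pool's EC profile can't be changed after creation, each variation was a freshly created pool):

  # profile with non-default k/m
  ceph osd erasure-code-profile set ecprofile42 k=4 m=2 ruleset-failure-domain=osd
  ceph osd pool create .rgw.buckets 128 128 erasure ecprofile42

  # plain rados I/O against the same pool is fine
  rados -p .rgw.buckets bench 30 write
  rados -p .rgw.buckets put testobj /tmp/testfile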

I turned up verbose logging on the OSDs and RGW (settings sketched after the log below) and didn't see any obvious errors. Here is a snippet from the RGW log, from the context/thread that failed:

7f8224dfa700  1 -- 198.18.32.12:0/1015918 --> 198.18.32.13:6815/28535 -- osd_op(client.4362.0:206 .dir.default.4327.1 [call rgw.bucket_complete_op] 10.ffda47da ack+ondisk+write e85) v4 -- ?+0 0x7f81a8094d30 con 0x7f82400023c0
7f8224dfa700 20 -- 198.18.32.12:0/1015918 submit_message osd_op(client.4362.0:206 .dir.default.4327.1 [call rgw.bucket_complete_op] 10.ffda47da ack+ondisk+write e85) v4 remote, 198.18.32.13:6815/28535, have pipe.
7f8224dfa700  0 WARNING: set_req_state_err err_no=95 resorting to 500
7f8224dfa700  2 req 7:0.072198:s3:PUT /xyzxyzxyz:put_obj:http status=500
7f8224dfa700  1 ====== req done req=0x7f823000f880 http_status=500 ======
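
For completeness, the logging I turned up was roughly the following (set in ceph.conf under [osd] and the radosgw client section, or injected at runtime; exact levels are from memory):

  debug osd = 20
  debug ms = 1
  debug rgw = 20

  # or, on running OSDs
  ceph tell osd.* injectargs '--debug-osd 20 --debug-ms 1'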

Thanks,
-mike