Re: ec pools and radosgw

On Thu, 27 Mar 2014, Yehuda Sadeh wrote:

On Wed, Mar 26, 2014 at 4:48 PM, Michael Nelson <mn+ceph-users@xxxxxxxx> wrote:
I am playing around with erasure-coded pools on 0.78-348 (firefly) and am
attempting to enable EC on the .rgw.buckets pool for radosgw
(fresh install).

If I use a plain EC profile (no settings changed), uploads of various sizes
work fine, and EC seems to be working based on how much space is
being used in the cluster. If I start changing the k or m values, multipart
uploads start failing (on the first chunk). I haven't seen issues with rados
put or rados bench on EC pools. I saw the same behavior on the official
v0.78 release.
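
For reference, the rados-level checks mentioned above were along these
lines (object and file names are illustrative; the EC pool is assumed to
be .rgw.buckets):

rados -p .rgw.buckets put testobj /tmp/testfile   # write one object directly to the EC pool
rados bench -p .rgw.buckets 10 write              # short write benchmark against the same pool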

I turned up verbose logging on the OSDs and RGW but don't see any obvious
errors. Here is a snippet from the RGW log, from the context/thread that failed:

7f8224dfa700  1 -- 198.18.32.12:0/1015918 --> 198.18.32.13:6815/28535 --
osd_op(client.4362.0:206 .dir.default.4327.1 [call rgw.bucket_complete_op]
10.ffda47da ack+ondisk+write e85) v4 -- ?+0 0x7f81a8094d30 con
0x7f82400023c0
7f8224dfa700 20 -- 198.18.32.12:0/1015918 submit_message
osd_op(client.4362.0:206 .dir.default.4327.1 [call rgw.bucket_complete_op]
10.ffda47da ack+ondisk+write e85) v4 remote, 198.18.32.13:6815/28535, have
pipe.
7f8224dfa700  0 WARNING: set_req_state_err err_no=95 resorting to 500
7f8224dfa700  2 req 7:0.072198:s3:PUT /xyzxyzxyz:put_obj:http status=500
7f8224dfa700  1 ====== req done req=0x7f823000f880 http_status=500 ======


There's a known issue with EC pools and multipart uploads, and a
corresponding Ceph tracker issue was created (#7676). A fix for it was
merged a couple of days ago but did not make the cut for 0.78. The fix
itself requires setting up another replicated pool on the zone to hold
the relevant information that cannot be stored in an EC pool.
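
Roughly, and assuming the merged fix ends up exposing this as a
data_extra_pool field on the zone's placement target (the pool name
.rgw.buckets.extra here is purely illustrative), wiring it up would look
something like:

radosgw-admin zone get > zone.json
# edit zone.json so the placement target points its extra (non-EC)
# data at a replicated pool, e.g.:
#   "placement_pools": [
#     { "key": "default-placement",
#       "val": { "index_pool": ".rgw.buckets.index",
#                "data_pool": ".rgw.buckets",
#                "data_extra_pool": ".rgw.buckets.extra" } } ]
radosgw-admin zone set < zone.json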

OK, makes sense. If I am doing something like this:

ceph osd crush rule create-erasure ecruleset --debug-ms=20
ceph osd erasure-code-profile set myprofile ruleset-failure-domain=osd k=3 m=3
ceph osd pool create .rgw.buckets 400 400 erasure myprofile ecruleset

Will the replicated pool be created automatically like the other pools are?
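If it isn't, I assume creating one by hand would look something like this
(pool name and PG counts illustrative):

ceph osd pool create .rgw.buckets.extra 64 64 replicated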

Thanks,
-mike
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



