Re: civetweb threads

On Sat, Nov 15, 2014 at 8:57 PM, pushpesh sharma <pushpesh.eck@xxxxxxxxx> wrote:
> Yehuda,
>
> I believe it would not be wise to increase the value of this
> parameter until the underlying threading model of RGW is fixed.
> I tried various 'rgw_thread_pool_size' values (128, 256, 512, 1024)
> and didn't observe any better throughput; instead, response time
> increased. This suggests RGW is not able to handle the increased
> concurrency.
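>
> For reference, a minimal sketch of the ceph.conf settings I was
> varying (the section name and the frontends line are assumptions,
> based on the civetweb setup discussed later in this thread):
>
> [client.radosgw.gateway]
> rgw thread pool size = 512
> rgw frontends = "civetweb port=80"
>
> with 512 being one of the values tried above.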
>
> I have another observation regarding 'rgw_max_chunk_size'. I was
> experimenting with this parameter and saw the following behavior on
> a test deployment, then verified it in a development environment:
>
> 1. Based on the value set for 'rgw_max_chunk_size', every Swift
> object gets sliced into equal-sized chunks, and each chunk is
> created as a RADOS object in the pool (.rgw.buckets).
>
> 2. While reading the same Swift object, the concatenation of all
> those RADOS objects is returned.
>
> 3. However, this only works for 'rgw_max_chunk_size' values up to
> 4MB; beyond that, GET requests fail and only the first RADOS
> object is returned.
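>
> (A quick way to see the slicing, assuming the .rgw.buckets pool
> name from point 1; I believe the tail chunks show up as shadow
> objects alongside the head object:
>
> $ rados -p .rgw.buckets ls
>
> i.e. one head object plus one shadow object per additional chunk.)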
>
> #######################
> $./ceph --admin-daemon out/client.admin.9660.asok config set
> rgw_max_chunk_size 4195000
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> { "success": "rgw_max_chunk_size = '4195000' "}
>
>
> swift -A http://localhost:8000/auth -U tester:testing -K asdf upload
> my4+MB_chunked ceph-0.85.tar.gz
> ceph-0.85.tar.gz
> ceph@Ubuntu14:~$ cd copy/
> ceph@Ubuntu14:~/copy$ swift -A http://localhost:8000/auth -U
> tester:testing -K asdf download my4+MB_chunked ceph-0.85.tar.gz
> ceph-0.85.tar.gz: md5sum != etag, ea27d35584bae391e9f3e67c86849a66 !=
> fd9237fa01803398bd8012c4d59975d8
> ceph-0.85.tar.gz [auth 0.021s, headers 0.040s, total 30.045s, 0.140 MB/s]
> ceph-0.85.tar.gz: read_length != content_length, 4194304 != 7384934
> ######################
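>
> (To spell out the numbers above: content_length is 7,384,934 bytes
> and the chunk size was set to 4,195,000, so the upload should have
> produced two RADOS objects; the download stopped at read_length
> 4,194,304, which is exactly 4 MB, slightly short of even the first
> 4,195,000-byte chunk.)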
>
> Is this behavior sane? Why is the Swift object chunked by default

I'd need to see the logs to determine what was going on. There's
another parameter that limits the maximum request size
(rgw_get_obj_max_req_size), so you probably hit that one. Note that
it's not advisable to make the chunk size too big.
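To check what it is currently set to, something along these lines
should work (socket path copied from your transcript above; I'm
assuming "config get" is available on the admin socket the same way
"config set" is):

$ ./ceph --admin-daemon out/client.admin.9660.asok config get rgw_get_obj_max_req_size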

> based on this value? Should the decision to chunk the object come
> from the end user (like native Swift)?

Does native Swift even support striping? Not having striping, or
having variable stripe sizes, could have a severe impact on data
balancing. Also, the gateway ensures atomicity of certain operations
by limiting the size of the object head.

Yehuda

>
> On Sat, Nov 15, 2014 at 9:48 PM, Yehuda Sadeh <yehuda@xxxxxxxxxx> wrote:
>> On Sat, Nov 15, 2014 at 3:55 AM, Mustafa Muhammad
>> <mustafaa.alhamdaani@xxxxxxxxx> wrote:
>>> On Sat, Nov 15, 2014 at 10:28 AM, Mustafa Muhammad
>>> <mustafaa.alhamdaani@xxxxxxxxx> wrote:
>>>> Hi,
>>>> I am using civetweb in my radosgw; if I set "rgw thread pool size"
>>>> to more than 1024, civetweb doesn't work. For example:
>>>> rgw thread pool size = 1024
>>>> rgw frontends = "civetweb port=80"
>>>>
>>>> #ps aux -L | grep rados | wc -l
>>>> 1096
>>>>
>>>> Everything works fine.
>>>>
>>>>
>>>> If I use:
>>>> rgw thread pool size = 1025
>>>> rgw frontends = "civetweb port=80"
>>>>
>>>> # ps aux -L | grep rados | wc -l
>>>> 43
>>>>
>>>> And the HTTP server is not listening.
>>>>
>>>> If I don't use civetweb:
>>>> rgw thread pool size = 10240
>>>>
>>>> # ps aux -L | grep rados | wc -l
>>>> 10278
>>>>
>>>> Regards
>>>>
>>>> Mustafa Muhammad
>>>
>>> I found the problem; it is hardcoded here:
>>> https://github.com/ceph/civetweb/blob/master/src/civetweb.c
>>> as:
>>> #define MAX_WORKER_THREADS 1024
>>>
>>> I increased it to 20480 and compiled from source; problem solved.
>>> We should make a patch, right?
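>>>
>>> As a sketch, the patch would just be (20480 being the value I
>>> picked; anything comfortably above the intended thread pool size
>>> should do):
>>>
>>> -#define MAX_WORKER_THREADS 1024
>>> +#define MAX_WORKER_THREADS 20480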
>>
>> Please do, preferably as a github pull request. Also, if you could
>> open a ceph tracker issue with the specifics, that would be great.
>>
>> Thanks,
>> Yehuda
>
>
>
> --
> -Pushpesh



