Re: civetweb threads

On Mon, Nov 17, 2014 at 3:03 PM, pushpesh sharma <pushpesh.eck@xxxxxxxxx> wrote:
> Mustafa,
>
> I am not positive about the results you posted here (200 MB/s ==> 1200
> MB/s just by increasing 'rgw_thread_pool_size'). These are the results I
> observed on my setup:
>
> 1. Workload: 100% Read, Object_size=1MB, ClientWorkers: 512,
> rgw_object_chunk_size=1048576, rgw_thread_pool_size=128
>
> General Report
>
> Op-Type  Op-Count     Byte-Count  Avg-ResTime  Avg-ProcTime  Throughput    Bandwidth  Succ-Ratio
> Read     515.99 kops  515.99 GB   74.37 ms     72.2 ms       1721.04 op/s  1.72 GB/s  99.33%
>
> ResTime (RT) Details
> Op-Type  60%-RT   80%-RT   90%-RT    95%-RT    99%-RT    100%-RT
> Read     < 30 ms  < 80 ms  < 200 ms  < 290 ms  < 670 ms  < 6,530 ms
>
>
> 2. Workload: 100% Read, Object_size=1MB, ClientWorkers: 512,
> rgw_object_chunk_size=1048576, rgw_thread_pool_size=512
>
> General Report
>
> Op-Type  Op-Count     Byte-Count  Avg-ResTime  Avg-ProcTime  Throughput    Bandwidth  Succ-Ratio
> Read     521.21 kops  521.21 GB   294.17 ms    291.7 ms      1740.46 op/s  1.74 GB/s  100%
>
>
> ResTime (RT) Details
> Op-Type  60%-RT    80%-RT    90%-RT    95%-RT    99%-RT      100%-RT
> Read     < 280 ms  < 360 ms  < 450 ms  < 570 ms  < 1,040 ms  < 3,790 ms
>
>
> You can check out more setup details here. It is certainly possible I am
> missing some other configs; please enlighten me.
>

In my case, I have very large files (multiple gigabytes) and a very
large number of connections, so I hit the MAX_WORKER_THREADS limit,
which is 1024 in the civetweb code. Increasing "rgw thread pool size"
was, for me, a way to raise "num_op_thread" so civetweb could handle
more than 1024 connections; with that I handled more than 7000
connections and got the 1200 MB/s.

In your blog, you said you used civetweb in the default configuration
("'num_op_thread', which is set to 128"), but that number is mapped from
rgw_thread_pool_size, so when you have more clients than threads, they
will wait.
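
For reference, the relevant part of my ceph.conf looks roughly like
this (a sketch; the exact section name depends on your deployment):

[client.radosgw.gateway]
rgw frontends = "civetweb port=80"
# Mapped to civetweb's worker thread count; values above 1024 need
# civetweb's MAX_WORKER_THREADS raised (see the patch discussed below).
rgw thread pool size = 2048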

Regards
Mustafa

>
> On Sun, Nov 16, 2014 at 4:33 PM, Mustafa Muhammad
> <mustafaa.alhamdaani@xxxxxxxxx> wrote:
>> On Sun, Nov 16, 2014 at 7:57 AM, pushpesh sharma <pushpesh.eck@xxxxxxxxx>
>> wrote:
>>> Yehuda,
>>>
>>> I believe it would not be wise to increase the value of this
>>> parameter until the underlying threading model of RGW is fixed.
>>> I tried various 'rgw_thread_pool_size' values (128, 256, 512, 1024)
>>> and didn't observe any better throughput; instead, response time
>>> increased. This means RGW is not able to handle the increased
>>> concurrency.
>> I needed to increase this number to raise the number of concurrent
>> connections to civetweb; I need a high number of threads (>7000
>> connections, large files).
>> My throughput went from 200 MB/s to 1200 MB/s (the 10G interface is
>> full; I could have gone higher) due to the increased civetweb
>> connections.
>>
>>> I have another observation regarding 'rgw_max_chunk_size'. I was
>>> doing some experiments with this parameter. I have seen the following
>>> behavior on a test deployment and then verified it in a development
>>> environment:
>>>
>>> 1. Based on the value we set for 'rgw_max_chunk_size', every Swift
>>> object gets sliced into equal-sized chunks, and each chunk is created
>>> as a RADOS object in the pool (.rgw.buckets).
>>>
>>> 2. While reading the same Swift object, the concatenation of all its
>>> RADOS objects is returned.
>>>
>>> 3. However, setting 'rgw_max_chunk_size' only works up to 4 MB, after
>>> which GET requests fail and only the first RADOS object is returned
>>> (see the arithmetic and transcript below).
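>>>
>>> As a concrete example of the slicing in point 1: with
>>> rgw_max_chunk_size = 1048576, the 7,384,934-byte ceph-0.85.tar.gz
>>> used below would be split into roughly ceil(7384934/1048576) = 8
>>> RADOS objects (seven full 1 MiB chunks plus one 44,902-byte tail).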
>>>
>>> #######################
>>> $./ceph --admin-daemon out/client.admin.9660.asok config set
>>> rgw_max_chunk_size 4195000
>>> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
>>> { "success": "rgw_max_chunk_size = '4195000' "}
>>>
>>>
>>> swift -A http://localhost:8000/auth -U tester:testing -K asdf upload
>>> my4+MB_chunked ceph-0.85.tar.gz
>>> ceph-0.85.tar.gz
>>> ceph@Ubuntu14:~$ cd copy/
>>> ceph@Ubuntu14:~/copy$ swift -A http://localhost:8000/auth -U
>>> tester:testing -K asdf download my4+MB_chunked ceph-0.85.tar.gz
>>> ceph-0.85.tar.gz: md5sum != etag, ea27d35584bae391e9f3e67c86849a66 !=
>>> fd9237fa01803398bd8012c4d59975d8
>>> ceph-0.85.tar.gz [auth 0.021s, headers 0.040s, total 30.045s, 0.140 MB/s]
>>> ceph-0.85.tar.gz: read_length != content_length, 4194304 != 7384934
>>> ######################
>>>
>>> Is this behavior sane? Why is the Swift object chunked by default
>>> based on this value? Should the decision to chunk the object come
>>> from the end user (like native Swift)?
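>>>
>>> One way to confirm how many RADOS objects back the Swift object is
>>> to list the data pool directly. A rough sketch; the exact object
>>> names depend on the bucket marker, so the grep pattern is only
>>> illustrative:
>>>
>>> # list the RADOS objects in the bucket data pool and pick out the
>>> # pieces that belong to the uploaded object
>>> $ rados -p .rgw.buckets ls | grep ceph-0.85.tar.gz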
>>>
>>> On Sat, Nov 15, 2014 at 9:48 PM, Yehuda Sadeh <yehuda@xxxxxxxxxx> wrote:
>>>> On Sat, Nov 15, 2014 at 3:55 AM, Mustafa Muhammad
>>>> <mustafaa.alhamdaani@xxxxxxxxx> wrote:
>>>>> On Sat, Nov 15, 2014 at 10:28 AM, Mustafa Muhammad
>>>>> <mustafaa.alhamdaani@xxxxxxxxx> wrote:
>>>>>> Hi,
>>>>>> I am using civetweb in my radosgw, if I use "rgw thread pool size"
>>>>>> that is more than 1024, civetweb doesn't work.
>>>>>> e.g.
>>>>>> rgw thread pool size = 1024
>>>>>> rgw frontends = "civetweb port=80"
>>>>>>
>>>>>> #ps aux -L | grep rados | wc -l
>>>>>> 1096
>>>>>>
>>>>>> everything works fine
>>>>>>
>>>>>>
>>>>>> If I use:
>>>>>> rgw thread pool size = 1025
>>>>>> rgw frontends = "civetweb port=80"
>>>>>>
>>>>>> # ps aux -L | grep rados | wc -l
>>>>>> 43
>>>>>>
>>>>>> And the HTTP server is not listening.
>>>>>>
>>>>>> If I don't use civetweb:
>>>>>> rgw thread pool size = 10240
>>>>>>
>>>>>> # ps aux -L | grep rados | wc -l
>>>>>> 10278
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Mustafa Muhammad
>>>>>
>>>>> I found the problem; it is hardcoded here:
>>>>> https://github.com/ceph/civetweb/blob/master/src/civetweb.c
>>>>> as:
>>>>> #define MAX_WORKER_THREADS 1024
>>>>>
>>>>> I increased it to 20480 and compiled from source; problem solved.
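>>>>> The change is just the one define; as a diff it would look like:
>>>>>
>>>>> -#define MAX_WORKER_THREADS 1024
>>>>> +#define MAX_WORKER_THREADS 20480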
>>>>> I should make a patch, right?
>>>>
>>>> Please do, preferably as a GitHub pull request. Also, it would be
>>>> great if you could open a Ceph tracker issue with the specifics.
>>>>
>>>> Thanks,
>>>> Yehuda
>>>
>>>
>>>
>>> --
>>> -Pushpesh
>
>
>
> --
> -Pushpesh
>