Re: civetweb threads

Mustafa,

I am not convinced by the results you posted here (200 MB/s ==>
1200 MB/s just by increasing 'rgw_thread_pool_size'). These are the
results I observed on my setup:

1. Workload: 100% read, object size = 1 MB, client workers = 512,
rgw_object_chunk_size = 1048576, rgw_thread_pool_size = 128

General Report

Op-Type  Op-Count     Byte-Count  Avg-ResTime  Avg-ProcTime  Throughput    Bandwidth  Succ-Ratio
read     515.99 kops  515.99 GB   74.37 ms     72.2 ms       1721.04 op/s  1.72 GB/s  99.33%

ResTime (RT) Details

Op-Type  60%-RT   80%-RT   90%-RT    95%-RT    99%-RT    100%-RT
read     < 30 ms  < 80 ms  < 200 ms  < 290 ms  < 670 ms  < 6,530 ms


2. Workload: 100% read, object size = 1 MB, client workers = 512,
rgw_object_chunk_size = 1048576, rgw_thread_pool_size = 512

General Report

Op-Type  Op-Count     Byte-Count  Avg-ResTime  Avg-ProcTime  Throughput    Bandwidth  Succ-Ratio
read     521.21 kops  521.21 GB   294.17 ms    291.7 ms      1740.46 op/s  1.74 GB/s  100%

ResTime (RT) Details

Op-Type  60%-RT    80%-RT    90%-RT    95%-RT    99%-RT      100%-RT
read     < 280 ms  < 360 ms  < 450 ms  < 570 ms  < 1,040 ms  < 3,790 ms


In other words, going from rgw_thread_pool_size=128 to 512 left the
bandwidth essentially flat (~1.72 GB/s vs ~1.74 GB/s) while the average
response time rose from ~74 ms to ~294 ms.

You can find more setup details at
http://pushpeshsharma.blogspot.in/2014/11/openstack-swift-vs-ceph-rgw-read.html

It is certainly possible that I am missing some other configs; please
enlighten me.
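
For reference, the relevant ceph.conf snippet for run 2 would look
roughly like the one below; the section name is only illustrative and
the civetweb frontend line follows Mustafa's example, so treat this as a
sketch rather than my exact config:

[client.radosgw.gateway]
    # section name illustrative; thread pool was 128 in run 1, 512 in run 2
    rgw thread pool size = 512
    rgw object chunk size = 1048576
    rgw frontends = "civetweb port=80"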

On Sun, Nov 16, 2014 at 10:41 PM, Yehuda Sadeh <yehuda@xxxxxxxxxx> wrote:
> On Sun, Nov 16, 2014 at 2:51 AM, Mustafa Muhammad
> <mustafaa.alhamdaani@xxxxxxxxx> wrote:
>> On Sat, Nov 15, 2014 at 7:18 PM, Yehuda Sadeh <yehuda@xxxxxxxxxx> wrote:
>>> On Sat, Nov 15, 2014 at 3:55 AM, Mustafa Muhammad
>>> <mustafaa.alhamdaani@xxxxxxxxx> wrote:
>>>> On Sat, Nov 15, 2014 at 10:28 AM, Mustafa Muhammad
>>>> <mustafaa.alhamdaani@xxxxxxxxx> wrote:
>>>>> Hi,
>>>>> I am using civetweb in my radosgw; if I set "rgw thread pool size"
>>>>> to more than 1024, civetweb doesn't work.
>>>>> e.g.
>>>>> rgw thread pool size = 1024
>>>>> rgw frontends = "civetweb port=80"
>>>>>
>>>>> #ps aux -L | grep rados | wc -l
>>>>> 1096
>>>>>
>>>>> everything works fine
>>>>>
>>>>>
>>>>> If I use:
>>>>> rgw thread pool size = 1025
>>>>> rgw frontends = "civetweb port=80"
>>>>>
>>>>> # ps aux -L | grep rados | wc -l
>>>>> 43
>>>>>
>>>>> And the HTTP server is not listening.
>>>>>
>>>>> If I don't use civetweb:
>>>>> rgw thread pool size = 10240
>>>>>
>>>>> # ps aux -L | grep rados | wc -l
>>>>> 10278
>>>>>
>>>>> Regards
>>>>>
>>>>> Mustafa Muhammad
>>>>
>>>> I found the problem; it is hardcoded here:
>>>> https://github.com/ceph/civetweb/blob/master/src/civetweb.c
>>>> as:
>>>> #define MAX_WORKER_THREADS 1024
>>>>
>>>> I increased it to 20480 and compiled from source; problem solved.
>>>> We should make a patch, right?
>>>
>>> Please do, preferably a github pull request. Also, it would be great
>>> if you could open a ceph tracker issue with the specifics.
>>
>> I wanted to create a github pull request with this value set to
>> 20480, but thought I should ask whether you want it to be hardcoded.
>> I understood that 'rgw_thread_pool_size' maps to 'num_op_thread' in
>> civetweb; is there a hardcoded max for rgw_thread_pool_size, so that I
>> could set the same value for MAX_WORKER_THREADS? What do you suggest?
>
> I don't think there's a hardcoded max for the rgw thread pool size. A
> configurable setting would be nicer, but I'm not really tied to it,
> depending on what the ramifications are. Whichever route you take,
> it'd be nice to get it upstream to civetweb, so it needs to make sense
> in that context.
>
> Yehuda
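
For what it's worth, here is a minimal sketch of the kind of change
being discussed, assuming the cap is still the single MAX_WORKER_THREADS
define Mustafa found in civetweb.c and that the start-up path rejects a
larger num_threads value (helper names and the exact check in the
upstream source may differ):

#include <stdlib.h>

/* Sketch, not verbatim civetweb source: let packagers override the cap
 * at build time (e.g. -DMAX_WORKER_THREADS=20480) while keeping the
 * upstream default of 1024. */
#if !defined(MAX_WORKER_THREADS)
#define MAX_WORKER_THREADS 1024
#endif

/* Illustrates the kind of start-up check that would make the server
 * refuse to start when the configured num_threads exceeds the cap,
 * which matches radosgw no longer listening once rgw_thread_pool_size
 * goes above 1024.  Returns 0 on success, -1 on rejection. */
static int check_worker_threads(const char *num_threads_option)
{
    int workerthreadcount = atoi(num_threads_option);

    if (workerthreadcount <= 0 || workerthreadcount > MAX_WORKER_THREADS)
        return -1;   /* start-up would bail out here */

    return 0;
}

A build-time override keeps the civetweb side simple; a run-time option
would need a new configuration knob accepted upstream, which is the
trade-off Yehuda points out above.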



-- 
-Pushpesh



