pjsip-perf memory size

hi benny,
    While reading the source code of the pj memory pool, I found there is a
PJ_HAS_POOL_ALT_API macro for pool debugging purposes.
    But compilation fails if I define both PJ_HAS_POOL_ALT_API and PJ_POOL_DEBUG in
config_site.h.
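
For reference, the failing combination is just these two defines (a sketch of
my config_site.h; as I understand it, PJ_POOL_DEBUG makes each pool allocation
go through malloc()/free() so heap tools can see it):

  /* config_site.h */
  #define PJ_HAS_POOL_ALT_API  1   /* switch to the alternate pool API      */
  #define PJ_POOL_DEBUG        1   /* per-allocation malloc() for debugging */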

regards,
Gang

On Fri, Sep 26, 2008 at 11:41 AM, Gang Liu <gangban.lau at gmail.com> wrote:

>
> On Fri, Sep 26, 2008 at 7:20 AM, Benny Prijono <bennylp at pjsip.org> wrote:
>
>> On Thu, Sep 25, 2008 at 11:11 AM, Gang Liu <gangban.lau at gmail.com> wrote:
>>
>>> hi benny,
>>>   I redid the test carefully today. The result was still strange: the
>>> memory wasn't released back to the OS even after waiting a long time.
>>> Below are some data I collected. Next time I will try valgrind to
>>> see what is happening on the heap. My own program using pjsip has the same
>>> issue, but valgrind didn't find any memory leak in it.
>>>
>>>
>>
>> I think this could be attributed to a couple of things. When I run sipp
>> against pjsua, I notice that after a while there are a few CONFIRMED calls
>> left in pjsua, even though sipp has been paused for some time. Although this
>> is not right, I don't think this alone explains the huge memory usage.
>>
>> The second thing, and I think this is the most probable cause of the
>> symptom, is memory caching by libc. You can search for this topic
>> on the net. With pjsip, all heap memory (except the heap allocated by third
>> party libraries such as speex, libsrtp, or portaudio) is allocated from the
>> caching pool, and with pjsua you can see the list of memory blocks used by
>> the caching pool with the "dd" command. In my case, although "dd" shows low
>> memory usage by the caching pool, the actual memory usage observed in "top"
>> is still high, so the caching must be done by something outside pjlib.
>>
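>> If it is glibc caching, one thing that could be tried (just a sketch, not
>> something pjlib does for you; malloc_trim() is glibc-specific) is asking the
>> allocator to hand its free pages back to the OS once the test is over:
>>
>>   #include <malloc.h>   /* glibc: declares malloc_trim() */
>>
>>   /* Ask glibc to return free heap pages to the OS.
>>    * Returns 1 if some memory was actually released. */
>>   if (malloc_trim(0))
>>       puts("glibc released memory back to the OS");
>>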
>
> Yes, I already tried the "dd" command before, and I also dumped the
> pjsip-perf pools. The dump info also shows that memory usage is low.
> But if I run the test all day at a higher cps, pjsip-perf uses almost
> 1.5 GB of RAM and pushes my Linux box into swapping.
> So it is interesting what the reason is. I will get back once I have tried
> valgrind.
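>
> When I do, the plan is roughly the following (massif is valgrind's heap
> profiler; the exact flags are just what I intend to try):
>
>   valgrind --tool=massif ./pjsip-perf-i686-pc-linux-gnu --method=INVITE \
>       --local-port=25060 --trying --ringing
>   ms_print massif.out.<pid>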
>
> regards,
>  Gang
>
>
>
>>
>> Cheers
>>  Benny
>>
>>
>>
>>
>>>   1,  ####### pjsip compile #########
>>>       I copied config_site_sample.h to config_site.h, then defined
>>> PJ_CONFIG_MAXIMUM_SPEED in that file and changed
>>> PJ_IOQUEUE_MAX_HANDLES to 1000. I also set CACHING_POOL_SIZE to zero in
>>> pjsip-perf.c (see the sketch below).
>>>      ./configure; make dep; make
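>>>
>>> The edits amount to roughly this (a sketch; config_site.h here is a copy
>>> of config_site_sample.h with these lines added near the top):
>>>
>>>   /* config_site.h */
>>>   #define PJ_CONFIG_MAXIMUM_SPEED       /* enable the speed-tuned settings */
>>>   #define PJ_IOQUEUE_MAX_HANDLES  1000  /* raised from the default         */
>>>
>>>   /* pjsip-perf.c */
>>>   #define CACHING_POOL_SIZE       0     /* don't cache freed pools         */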
>>>
>>>   2,  ####### sipp as caller, pjsip-perf as callee, 200 cps  ############
>>>
>>> sipp -sf uac_rr.xml -p 5061 -l 50000 -r 200 -d 6000 -m 1000000 -s 2
>>> 192.168.0.233:25060
>>>
>>> -bash-3.00$ ./pjsip-perf-i686-pc-linux-gnu --method=INVITE
>>> --local-port=25060 --trying --ringing
>>> PJSIP Performance Measurement Tool v0.9.0-release
>>> (c)2006 pjsip.org
>>> pjsip-perf started in server-mode
>>> Receiving requests on the following URIs:
>>>   sip:0@192.168.0.233:25060    for stateless handling
>>>   sip:1@192.168.0.233:25060    for stateful handling
>>>   sip:2@192.168.0.233:25060    for call handling
>>> INVITE with non-matching user part will be handled call-statefully
>>> Press <ENTER> to quit
>>> Total(rate): stateless:0 (0/s), statefull:0 (0/s), call:13.3K (199/s)
>>>
>>>   memory when test running
>>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>> 23676 gang      18   0  156m 141m 1700 S  0.0  7.0   0:00.04
>>> pjsip-perf-i686
>>>
>>>    memory when 1 hour after 1 million calls finished
>>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>> 23676 gang      18   0  154m 140m 1700 S  0.0  6.9   0:00.24
>>> pjsip-perf-i686
>>>
>>> -bash-3.00$ ./pjsip-perf-i686-pc-linux-gnu --method=INVITE
>>> --local-port=25060 --trying --ringing
>>> PJSIP Performance Measurement Tool v0.9.0-release
>>> (c)2006 pjsip.org
>>> pjsip-perf started in server-mode
>>> Receiving requests on the following URIs:
>>>   sip:0@192.168.0.233:25060    for stateless handling
>>>   sip:1@192.168.0.233:25060    for stateful handling
>>>   sip:2@192.168.0.233:25060    for call handling
>>> INVITE with non-matching user part will be handled call-statefully
>>> Press <ENTER> to quit
>>> Total(rate): stateless:0 (0/s), statefull:0 (0/s), call:1.00M (0/s)
>>>  17:23:59.857   pjsip-perf.c Peak memory size: 151MB
>>> -bash-3.00$
>>>
>>>
>>> 3, ############ repeat the same test, but at 800 cps. #################
>>>
>>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>> 23827 gang      18   0  521m 506m 1700 S  0.0 25.0   2:42.07
>>> pjsip-perf-i686
>>>
>>> OS:
>>> -bash-3.00$ cat /etc/redhat-release
>>> CentOS release 4.7 (Final)
>>>
>>> GCC:
>>> -bash-3.00$ gcc -v
>>> Using built-in specs.
>>> Target: i686-pc-linux-gnu
>>> Configured with: ./configure
>>> Thread model: posix
>>> gcc version 4.1.1
>>>
>>> PJSIP:
>>> pjproject-0.9.0 release
>>>
>>>
>>>
>>> On Mon, Sep 22, 2008 at 7:31 AM, Benny Prijono <bennylp at pjsip.org> wrote:
>>>
>>>> On Fri, Sep 12, 2008 at 6:58 AM, Gang Liu <gangban.lau at gmail.com> wrote:
>>>>
>>>>> I know pjsip-perf is configured to cache up to 256 MB of memory.
>>>>>
>>>>> But the actual memory usage is much higher than this.
>>>>>
>>>>>
>>>> I don't think that's peculiar. If the application needs more memory,
>>>> then more memory will be allocated.
>>>>
>>>>
>>>> On Fri, Sep 12, 2008 at 1:20 PM, Gang Liu <gangban.lau at gmail.com> wrote:
>>>>
>>>>>  Hi,
>>>>>     I found that the memory size after all calls have finished is mostly
>>>>> the same as while the calls were running.
>>>>>
>>>>>
>>>>
>>>> Bear in mind that it may take up to 32 seconds before SIP
>>>> calls/transactions are destroyed even after they're disconnected/completed.
>>>>
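>>>> (The 32 seconds comes from the transaction layer keeping completed
>>>> transactions around, roughly 64*T1 per RFC 3261. If you want them to go
>>>> away faster in a test build, a sketch of what you could put in
>>>> config_site.h; the default is about 32 seconds:)
>>>>
>>>>   /* shorten how long a completed transaction lingers (milliseconds) */
>>>>   #define PJSIP_TD_TIMEOUT    5000
>>>>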
>>>>  -benny
>>>>
>>>>
>>>>
>>>>>      pjsip-perf is running as a UAS, sipp as a UAC.
>>>>>
>>>>>
>>>>>    Is this normal?
>>>>>
>>>>> 23047 gang      18   0  310m 296m 1492 S    0 14.6   0:00.02
>>>>> pjsip-perf-i686
>>>>> 23048 gang      15   0  310m 296m 1492 S    0 14.6   0:37.56
>>>>> pjsip-perf-i686
>>>>> regards,
>>>>> Gang
>>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>

