Re: [389-users] 389 pauses every 5 minutes under load

On 10/12/2011 04:15 PM, Justin Gronfur wrote:
> On 10/12/2011 03:07 PM, Rich Megginson wrote:
>> This is helpful.  Any chance you could paste the entire stack 
>> traces?  For example,
>> #0  0x0000003735c3c868 in slapi_get_mapping_tree_node_by_dn@plt () 
>> from /usr/lib64/dirsrv/libslapd.so.0
>> #0  0x0000003735c4ad38 in slapi_dn_normalize_ext () from 
>> /usr/lib64/dirsrv/libslapd.so.0
>> etc. are nice to have, but much better would be the entire stack 
>> traces of these calls so we can see where they are called from.
>
> Attached is a set of full gstack dumps taken at 1-second intervals.  
> The majority of them are select/poll/etc. calls that I filtered out 
> last time, but left in this time for context.
Thanks.

The select/poll calls show the server essentially idle: worker threads 
waiting on a condition variable for new work to perform.

You can eliminate many of these by decreasing your cn=config 
nsslapd-threadnum setting.  The default is 30, but you may find better 
performance by setting it to roughly 2 times the number of CPUs/cores 
on your machine (but at least 8).
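
For example, a minimal sketch of the change (the bind DN below is just 
the default placeholder, and depending on your version the new value 
may not take effect until a restart):

ldapmodify -x -D "cn=directory manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-threadnum
nsslapd-threadnum: 8
EOF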

Do you know if any of these come from a period of time during which the 
server is consuming a lot of CPU?
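
One rough way to check is to capture a CPU snapshot next to each stack 
dump, e.g. (this sketch assumes gstack is installed and a single 
ns-slapd instance is running):

PID=$(pidof ns-slapd)
while true; do
    TS=$(date +%Y%m%d-%H%M%S)
    top -b -n 1 -p "$PID" > "cpu-$TS.txt"    # CPU usage snapshot
    gstack "$PID" > "stacks-$TS.txt"         # full stack traces
    sleep 1
done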
>
> One of my coworkers wanted me to mention that we use long running ldap 
> connections (bound to a user's session for the duration of that 
> session unless the session is replicated to another jvm instance).  I 
> know that isn't really standard, but I don't think that should cause 
> these problems.
No, that should not be a problem.  And it is standard: many apps do this 
(e.g. a web service that uses LDAP for auth will not want to open and 
close a connection for every single user; it will typically use a 
connection pool of already-open, possibly idle connections).
>
> Tomorrow I'm planning on writing a forking bash script to replay the 
> exact same requests under the exact same load against 389, to 
> determine whether the problem is caused by the Java code or the 
> container itself (by eliminating them completely).  I'll keep you 
> posted on the results.
>
> Thanks,
> Justin
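
For what it's worth, a minimal forking sketch along those lines might 
look like this (host, port, suffix, filter, and worker count below are 
placeholders; substitute your real query mix):

#!/bin/bash
# Rough sketch: N workers, each replaying the same search in a loop.
# Runs until interrupted (Ctrl-C).
WORKERS=20
for i in $(seq 1 "$WORKERS"); do
    (
        while true; do
            ldapsearch -x -h localhost -p 389 \
                -b "dc=example,dc=com" "(uid=testuser)" > /dev/null
        done
    ) &
done
wait

Note that ldapsearch opens and closes a connection per invocation, so 
this won't exercise the long-running-connection pattern you described; 
a client that binds once and reuses the connection would be a closer 
match for your sessions.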
