In the phase with high CPU usage, could you run:
a) top -H -p <pid>
to see whether many threads are competing for the CPU or just one or
two are occupying it
b) pstack <pid>
to see what the threads are doing. Sometimes pstack output for the
complete process doesn't look meaningful; you can also run pstack <tpid>,
where tpid is the ID of one of the threads consuming the CPU (see the
sketch below).
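A quick sketch of both steps, assuming ns-slapd is the only instance
running (otherwise pick the PID by hand):

    # find the ns-slapd process ID
    PID=$(pgrep -x ns-slapd)
    # per-thread CPU view; press 'H' inside top to toggle threads if needed
    top -H -p "$PID"
    # stack dump for the whole process, then for a single hot thread
    # (replace 12345 with a thread ID shown by top)
    pstack "$PID"
    pstack 12345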
You are on a VM with 2 CPUs, but what is the real hardware? There have
been problems with RHDS on machines with NUMA architecture when the
threads of the process were distributed across different nodes. What was
the HW for SunDS?
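If you want to rule NUMA out, something along these lines might help
(numactl/numastat must be installed, and numastat must be new enough to
take -p; binding the server to one node is just an experiment, not a
recommended permanent setting, and the instance name is a placeholder):

    # show the NUMA topology the VM actually sees
    numactl --hardware
    # per-node memory usage of the running process
    numastat -p $(pgrep -x ns-slapd)
    # experiment: start ns-slapd pinned to a single node
    numactl --cpunodebind=0 --membind=0 \
        /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-INSTANCE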
Ludwig
On 03/31/2014 05:34 PM, Steve Holden wrote:
Hi, folks
I'm hoping to use 389 DS to replace our ancient Sun DS 5.2 service.
I've hit a snag with my 389 development server; its performance is far
worse than the 10-year-old servers it's intended to replace.
Things looked promising: the old directory data has been imported (with
only minor changes), read requests perform reasonably well, and isolated
write requests are ok.
However, after even a small number (typically 6) of consecutive write
requests (basic attribute changes to a single entry, say), the ns-slapd
process hits >100% CPU (of 2 CPUs), stays there for *at least* 10 seconds
per update, and blocks the client process attempting the update.
I can't see anything obvious in the performance counters or the logs to suggest
a problem. The updates are logged with "etime=0" in the access log.
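(In case it's useful: the access log can be summarised with logconv.pl,
which ships with 389 DS; the instance name below is a placeholder, and
the grep is just a quick check for non-zero etimes:)

    # summary of operations and timings from the access log
    logconv.pl /var/log/dirsrv/slapd-INSTANCE/access
    # any operations with a non-zero server-side etime?
    grep -E 'etime=[1-9]' /var/log/dirsrv/slapd-INSTANCE/access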
I've tried enabling different log levels in the error log.
Is it normal for the Plugin level to show constant re-scanning of CoS templates?
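(The Plugin log level can be toggled roughly like this; 65536 is the
plugin level, and the Directory Manager bind DN is an assumption about
the setup. Set the value back to 0 to disable:)

    ldapmodify -x -D "cn=Directory Manager" -W <<EOF
    dn: cn=config
    changetype: modify
    replace: nsslapd-errorlog-level
    nsslapd-errorlog-level: 65536
    EOF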
I'd be very grateful for any suggestions on how I can go about tracing
where the problem might be and how to resolve it...
Best wishes,
Steve
Details
The RHEL6.5 server is a VMware ESXi VM with 8GB RAM and 2 CPUs,
and is running the latest EPEL package for RHEL6 (v1.2.11.15-32).
(After a package upgrade a few weeks ago, I ran "setup-ds-admin.pl -u").
The directory contains in excess of 200,000 entries, and
its databases consume over 3.5GB on disk.
The userRoot database has therefore been configured with a 4GB entry
cache (and the general LDBM maximum cache is set at 6GB, though it's
quite possible I haven't understood how to set these correctly; I've
tried smaller values for each).
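(For reference, the settings meant here are the backend entry cache and
the global LDBM database cache; something like this shows the current
values, with the Directory Manager bind assumed:)

    # entry cache for the userRoot backend
    ldapsearch -x -D "cn=Directory Manager" -W \
        -b "cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
        nsslapd-cachememsize
    # global LDBM database cache
    ldapsearch -x -D "cn=Directory Manager" -W \
        -b "cn=config,cn=ldbm database,cn=plugins,cn=config" \
        nsslapd-dbcachesize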
The directory contains custom attributes, some of which are generated by
CoS, and many of which have been indexed (AFAIK, all attributes have
been re-indexed).
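(The index definitions can be listed like this, again with the Directory
Manager bind assumed:)

    # list the configured indexes for userRoot and their types
    ldapsearch -x -D "cn=Directory Manager" -W \
        -b "cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
        "(objectClass=nsIndex)" cn nsIndexType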
No replication has been configured so far.
--
389 users mailing list
389-users@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/389-users