Looks like you ran out of memory on the system (possibly a Directory
Server memory leak?). Was there anything in the Directory Server
errors log?
What version of 389 are you using? rpm -qa | grep 389-ds-base
You should monitor the ns-slapd process and see if it continues to grow
day after day. When you first start up the server it will take a while
until all the caches are filled, so the memory usage will grow at first,
but then it should level off and not grow significantly. There is always
some memory fragmentation, so the process will grow in size over time,
but that growth should be very minimal. If the process size does continue
to grow significantly, then we will ask you to run the server under
valgrind to gather memory leak debug info.
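If you want a simple way to track that, a rough sketch would be something
like the loop below (the log file path and the one-hour interval are just
examples, adjust to taste):

    # record RSS/VSZ (in kB) of ns-slapd once an hour
    while true; do
        date >> /var/tmp/ns-slapd-mem.log
        ps -o rss=,vsz=,cmd= -C ns-slapd >> /var/tmp/ns-slapd-mem.log
        sleep 3600
    done

Comparing the rss numbers from day to day will show whether the process
levels off or keeps growing.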
To speed up the caches being fully primed, you can run something like
this against each of the suffixes you have configured:

    ldapsearch -D "cn=directory manager" -W -b "<your suffix>" objectclass=top > /dev/null
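If you have more than one suffix you can wrap that in a loop; the suffix
values below are just placeholders, and with the ldapsearch invocation
shown above you will be prompted for the Directory Manager password on
each pass:

    # prime the entry cache for every configured suffix
    for suffix in "dc=example,dc=com" "dc=other,dc=org"; do
        ldapsearch -D "cn=directory manager" -W -b "$suffix" \
            objectclass=top > /dev/null
    done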
Mark
On 02/04/2015 01:20 PM, ghiureai wrote:
Hi List,
After successfully running 389-DS in production for 3 months,
we had DS crash this AM; see the OS error log below for details,
there are no errors in the DS logs. I would like to know if there
are any DS config options for memory garbage collection, etc.
My OS: Linux 2.6.32-431.el6.x86_64 #1 SMP Thu Nov 21 13:35:52 CST
2013 x86_64 x86_64 x86_64 GNU/Linux
Out of memory: Kill process 2090 (ns-slapd)
score 954 or sacrifice child
Feb 3 04:53:12 proc5-01 kernel: Killed process 2090, UID 500,
(ns-slapd) total-vm:27657260kB, anon-rss:15313560kB,
file-rss:268kB
Pid: 2228, comm: puppetd Not tainted 2.6.32-431.el6.x86_64 #1
Feb 3 04:53:11 proc5-01 kernel: Call Trace:
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff810d05b1>] ?
cpuset_print_task_mems_allowed+0x91/0xb0
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff81122960>] ?
dump_header+0x90/0x1b0
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8122798c>] ?
security_real_capable_noaudit+0x3c/0x70
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff81122de2>] ?
oom_kill_process+0x82/0x2a0
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff81122d21>] ?
select_bad_process+0xe1/0x120
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff81123220>] ?
out_of_memory+0x220/0x3c0
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8112fb3c>] ?
__alloc_pages_nodemask+0x8ac/0x8d0
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff81167b9a>] ?
alloc_pages_vma+0x9a/0x150
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8114980d>] ?
do_wp_page+0xfd/0x920
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8114a499>] ?
__do_fault+0x469/0x530
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8114a82d>] ?
handle_pte_fault+0x2cd/0xb00
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8104eeb7>] ?
pte_alloc_one+0x37/0x50
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8114b28a>] ?
handle_mm_fault+0x22a/0x300
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8104a8d8>] ?
__do_page_fault+0x138/0x480
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff81282571>] ?
cpumask_any_but+0x31/0x50
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff81150240>] ?
unmap_region+0x110/0x130
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8114e3ce>] ?
remove_vma+0x6e/0x90
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8152d45e>] ?
do_page_fault+0x3e/0xa0
Feb 3 04:53:11 proc5-01 kernel: [<ffffffff8152a815>] ?
page_fault+0x25/0x30
Feb 3 04:53:11 proc5-01 kernel: Mem-Info:
Feb 3 04:53:11 proc5-01 kernel: Node 0 DMA per-cpu:
Feb 3 04:53:11 proc5-01 kernel: CPU 0: hi: 0, btch: 1
usd: 0
Feb 3 04:53:11 proc5-01 kernel: CPU 1: hi: 0, btch: 1
usd: 0
Feb 3 04:53:11 proc5-01 kernel: CPU 2: hi: 0, btch: 1
usd: 0
--
389 users mailing list
389-users@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/389-users