Thank you, Casey. Have you seen the reply by Thierry about probable causes?
Alexander B. Nazarenko, PhD
IAM Services | Technology Partner Services
Harvard University Information Technology
P: 617-496-7150 | M: 617-803-3851
From: Casey Feskens <cfeskens@xxxxxxxxxxxxxx>
Reply-To: "General discussion list for the 389 Directory server project." <389-users@xxxxxxxxxxxxxxxxxxxxxxx>
Date: Sunday, April 16, 2023 at 11:39 PM
To: "General discussion list for the 389 Directory server project." <389-users@xxxxxxxxxxxxxxxxxxxxxxx>
Subject: [389-users] Re: 389 DS memory growth
We’ve been experiencing similar memory growth. I’ve had to quadruple the RAM on our LDAP hosts, but things seem stable at that level. Still unsure what the cause is. Glad to hear that at least someone else is seeing the same issue, so I can perhaps rule out an environmental change.
On Sun, Apr 16, 2023 at 6:07 PM Nazarenko, Alexander <alexander_nazarenko@xxxxxxxxxxx> wrote:
Hello colleagues,
On March 22nd we updated the 389-ds-base.x86_64 and 389-ds-base-libs.x86_64 packages on our eight RHEL 7.9 production servers from version 1.3.10.2-17.el7_9 to version 1.3.11.1-1.el7_9. During the same update we also moved the kernel from kernel-3.10.0-1160.80.1.el7.x86_64 to kernel-3.10.0-1160.88.1.el7.x86_64.
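For anyone comparing notes, the exact versions in play on a given host can be confirmed with:

    rpm -q 389-ds-base 389-ds-base-libs
    uname -r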
Approximately 12 days later, on April 3rd, all of the hosts started exhibiting memory growth: the “slapd” process was using over 90% of the 32 GB of system memory. This had NOT happened at any point during the couple of years before we applied these updates.
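If it helps to compare numbers, here is a minimal way to sample the resident size of the server process over time (assuming a single instance; the log path and interval are arbitrary). On RHEL 7 the daemon runs as ns-slapd, which is the name pidof needs:

    # Append a timestamped RSS sample (in KB) every 5 minutes
    while true; do
        echo "$(date -Is) $(ps -o rss= -p "$(pidof ns-slapd)")" >> /var/tmp/slapd-rss.log
        sleep 300
    done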
Two of the eight hosts act as primaries (formerly referred to as masters), while the other six act as read-only replicas. Three of the read-only replicas are used by our authorization system, and the other three are used by customer-based applications.
Currently we use system controls to restrict the memory usage.
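As a concrete example of such a control, a cap can be set with a systemd drop-in on the stock dirsrv unit (the instance name and limit below are placeholders, not our actual values):

    # /etc/systemd/system/dirsrv@INSTANCE.service.d/memory.conf
    [Service]
    MemoryLimit=24G

    systemctl daemon-reload
    systemctl restart dirsrv@INSTANCE

MemoryLimit= is the cgroup v1 setting understood by the systemd shipped with RHEL 7.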
Is this something other users are also experiencing, and what is the recommended way to stabilize the DS servers in this type of situation?
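In case it helps frame an answer: the knobs I would expect to come up are the ldbm caches. A sketch of inspecting and pinning them with the standard 1.3.x config attributes (the 512 MB value is purely illustrative, and a restart is required after changing nsslapd-dbcachesize):

    # Inspect the current autosizing and DB cache settings
    ldapsearch -x -D "cn=Directory Manager" -W \
        -b "cn=config,cn=ldbm database,cn=plugins,cn=config" \
        nsslapd-cache-autosize nsslapd-dbcachesize

    # pin-dbcache.ldif (file name is arbitrary): disable autosizing
    # and pin the DB cache to a fixed 512 MB
    dn: cn=config,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-cache-autosize
    nsslapd-cache-autosize: 0
    -
    replace: nsslapd-dbcachesize
    nsslapd-dbcachesize: 536870912

    ldapmodify -x -D "cn=Directory Manager" -W -f pin-dbcache.ldif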
Thanks,
- Alex
_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue
---------------------------------------------
Casey Feskens <cfeskens@xxxxxxxxxxxxxx>
Director of Infrastructure Services
Willamette Integrated Technology Services
Willamette University, Salem, OR
Phone: (503) 370-6950
---------------------------------------------