Re: Problem with userRoot cache

Hi Paul,


Okay, I think I found the bug you are running into:


https://bugzilla.redhat.com/show_bug.cgi?id=1627512


https://pagure.io/389-ds-base/issue/49618


So it sounds like you need to upgrade to:


389-ds-base-1.3.8.4-21 (RHEL/CentOS 7.6)


Or build the upstream 1.3.7 server yourself using the commit found in the Pagure ticket above.
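
For the build-it-yourself route, the rough shape is below. This is a sketch only: COMMIT_FROM_TICKET is a placeholder for the hash referenced in the ticket, the branch name is assumed, and configure options vary per site.

# Sketch: build the 1.3.7 branch with the fix from the ticket cherry-picked.
git clone https://pagure.io/389-ds-base.git
cd 389-ds-base
git checkout 389-ds-base-1.3.7          # assumed branch name for the 1.3.7 series
git cherry-pick COMMIT_FROM_TICKET      # placeholder for the commit in issue 49618
./autogen.sh && ./configure && make     # standard autotools build; adjust options as needed
sudo make install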


Regards,

Mark


On 1/16/19 11:38 AM, Mark Reynolds wrote:

Hey Paul,


On 1/16/19 10:14 AM, Paul Whitney wrote:

We were on version:  389-ds-base-1.3.7.5-24.

What OS?




The nsslapd-cache-autosize was set to 0.  We apply our own values.


To keep us afloat we have been forced to enable nsslapd-cache-autosize. However, our response times have degraded, and we feel we are no longer able to allocate/customize the cache settings as they relate to each database.


System: Virtual Machine

    8 CPUs

    64 GB RAM (of which 35 GB is free)


Two databases in the slapd instance:

userRoot = 21G

groupRoot = 1.8G


When we try to allocate anything above 50% for nsslapd-cache-autosize, the service fails to start, stating that the values cannot exceed 100.


When we disable nsslapd-cache-autosize, and punch in our numbers:


nsslapd-cache-autosize = 0

nsslapd-dbcachesize = 1073741824

nsslapd-cachememsize = 2147483648 (groupRoot) and 23622320128 (userRoot)


The service overwrites our settings and sets both databases to 2147483648. While that is okay for groupRoot, it is not for userRoot.
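
For reference, this is roughly how the values above would be applied over LDAP. A sketch only: it assumes the default cn=ldbm database,cn=plugins,cn=config layout, our backend names, and a Directory Manager bind, and the instance needs a restart afterwards for the new sizes to take effect.

# Sketch: disable autosizing and set explicit cache sizes.
# Assumes default backend DNs and Directory Manager credentials; adjust to your instance.
ldapmodify -x -D "cn=Directory Manager" -W <<'EOF'
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cache-autosize
nsslapd-cache-autosize: 0
-
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 1073741824

dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 23622320128

dn: cn=groupRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 2147483648
EOF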


Based on this information, is there a way/recommendation to:

  1. Force the values we enter to "stick"

Well, setting autosize to zero and explicitly setting the cache attributes is all it should take for it to stick.


  2. How can I better configure the auto cache sizing of these entries?

Start here:  http://www.port389.org/docs/389ds/design/autotuning.html   (check out the manual tuning section)


You can tune the autosizing to use more cache, but it is uniform across all backends:  groupRoot and userRoot would use the same values.
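
If you stay on autosizing, the knobs live on the same ldbm config entry. Roughly, as a sketch with placeholder percentages (see the doc above for how to pick real values):

# Sketch: autosize using 25% of free memory, with 40% of that going to the
# dbcache and the remainder split across the backend entry caches.
# Values are illustrative only.
ldapmodify -x -D "cn=Directory Manager" -W <<'EOF'
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cache-autosize
nsslapd-cache-autosize: 25
-
replace: nsslapd-cache-autosize-split
nsslapd-cache-autosize-split: 40
EOF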




I would offer logs if I could, but we cannot get them off the system.  It is hosted in a disconnected environment.

The logs would say why the server thinks it needs to resize your caches (a bug?), but it sounds like autosizing is not the issue here since it is set to zero. I'm not sure what else I can offer up without more log information. I suspect the server is not properly detecting the 64 GB of memory and thinks you have much less, which is why it's downsizing the cache values. This is all speculation without being able to look at the errors log (during startup). I find it very odd that you cannot get access to your own logs; they are such a vital part of the server that you should really get that addressed, or else we can't really help you :(
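
If you can at least run commands on the box, it may be worth checking what memory the server can actually see. A sketch; the instance name is a placeholder:

# Sketch: check what memory the host reports and whether the dirsrv unit
# carries a memory limit that could make the server think it has less RAM.
grep MemTotal /proc/meminfo
free -g
systemctl show dirsrv@INSTANCE -p MemoryLimit   # INSTANCE is a placeholder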


Regards,

Mark


Paul M. Whitney, RHCSA, CISSP
Chesapeake IT Consulting, Inc.

2680 Tobacco Rd

Chesapeake Beach, MD 20732 


Work: 443-492-2872

Cell:   410.493.9448

Email: paul.whitney@xxxxxxxxxxxxxxxxx
CONFIDENTIALITY NOTICE 
The information contained in this facsimile or electronic message is confidential information intended for the use of the individual or entity named above. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this facsimile message to the intended recipient, you are hereby notified that any dissemination, or copying of this communication is strictly prohibited. If this message contains non-public personal information about any consumer or customer of the sender or intended recipient, you are further prohibited under penalty of law from using or disclosing the information to any third party by provisions of the federal Gramm-Leach-Bliley Act. If you have received this facsimile or electronic message in error, please immediately notify us by telephone and return or destroy the original message to assure that it is not read, copied, or distributed by others.


From: William Brown <wbrown@xxxxxxx>
Sent: Tuesday, January 15, 2019 7:22:18 PM
To: 389-users@xxxxxxxxxxxxxxxxxxxxxxx
Cc: Paul Whitney
Subject: Re: [389-users] Problem with userRoot cache
 


> On 16 Jan 2019, at 06:49, Mark Reynolds <mreynolds@xxxxxxxxxx> wrote:
>
> What version were you previously on?
>
> Sounds like an issue with autocache sizing.  The errors log might give more info about why it's being reset.
>
> Also check if "nsslapd-cache-autosize" is set under "cn=config,cn=ldbm database,cn=plugins,cn=config".  If it is, set it to zero to stop the autosizing.

It certainly sounds like autosizing is still enabled here and is just resetting your values on you.
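
A quick way to confirm what the server is actually running with (a sketch, assuming a Directory Manager bind against the local instance):

# Sketch: read the current autosize and cache settings from the ldbm config entry.
ldapsearch -x -D "cn=Directory Manager" -W \
  -b "cn=config,cn=ldbm database,cn=plugins,cn=config" -s base \
  nsslapd-cache-autosize nsslapd-cache-autosize-split nsslapd-dbcachesize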

>
>
> On 1/15/19 3:42 PM, Paul Whitney wrote:
>> We recently updated to 389-ds-base-1.3.8.4-18.  I am not sure I can attribute this issue to the update since we are only now discovering it.  But nsslapd-cachememsize keeps reverting to a default value of 2 GB after we set it.  I have attempted to restore the value through the console and restarting the instance.  I have also tried stopping the instance and manually editing the dse.ldif file.  In both cases, the value is replaced with the 2 GB value.
>>
>> Any suggestions?
>>
>> Paul M. Whitney
>> RHCSA, VCP, CISSP, Security+
>> Chesapeake IT Consulting, Inc.
>> 2680 Tobacco Rd
>> Chesapeake Beach, MD 20732
>>
>> Work: 443-492-2872
>> Cell:   410.493.9448
>> Email: paul.whitney@xxxxxxxxxxxxxxxxx
>>
>>
>>


Sincerely,

William Brown
Software Engineer, 389 Directory Server
SUSE Labs


_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@xxxxxxxxxxxxxxxxxxxxxxx

