Re: Problem with userRoot cache

> On 17 Jan 2019, at 02:38, Mark Reynolds <mreynolds@xxxxxxxxxx> wrote:
> 
> Hey Paul,
> 
> On 1/16/19 10:14 AM, Paul Whitney wrote:
>> We were on version:  389-ds-base-1.3.7.5-24.
> What OS?
> 
> 
>> 
>> The nsslapd-cache-autosize was set to 0.  We apply our own values.
>> 
>> To keep us afloat, we have been forced to enable nsslapd-cache-autosize. However, our response times have degraded, and we feel we are no longer able to allocate/customize the cache settings for each database.
>> 
>> System:  Virtual Machine
>>          8 CPUs
>>          64 GB RAM (of which 35 GB is free)
>> 
>> Two databases in the slapd instance:
>> userRoot = 21G
>> groupRoot = 1.8G
>> 
>> When we try to allocate anything above 50% for nsslapd-cache-autosize, the service fails to start, stating that values cannot exceed 100.
>> 
>> When we disable nsslapd-cache-autosize, and punch in our numbers:
>> 
>> nsslapd-cache-autosize = 0
>> nsslapd-dbcachesize = 1073741824
>> nsslapd-cachememsize = 2147483648 (groupRoot) and 23622320128 (userRoot)
>> 
>> The service overwrites our settings and sets both databases to 2147483648. While that is OK for groupRoot, it is not for userRoot.
>> 
>> Based on this information, is there a way/recommendation to:
>> 	• Force the values we enter to "stick"?
> Well, setting autosize to zero and explicitly setting the cache attributes should be all it takes for them to stick.
> 
>> 	• Better configure the automatic cache sizing for these databases?
> Start here: http://www.port389.org/docs/389ds/design/autotuning.html (check out the manual tuning section)
> 
> You can tune the autosizing to use more cache, but it is uniform across all backends:  groupRoot and userRoot would use the same values.
> 
> 
>> 
>> I would offer logs if I could, but we cannot get them off the system.  It is hosted in a disconnected environment.
> The logs would say why the server thinks it needs to resize your caches (a bug?), but it sounds like autosizing is not the issue here since it is set to zero. I'm not sure what else I can offer without more log information. I suspect the server is not properly detecting the 64 GB of memory and thinks you have much less, which is why it's downsizing the cache values. This is all speculation without being able to look at the errors log (during startup). I find it very odd that you cannot get access to your own logs; they are such a vital part of the server that you should really get that addressed, or else we can't really help you :(

The server is likely detecting the RAM just fine; the autotuner uses a very conservative allocation due to fears over memory fragmentation issues that existed in the past. As a result, I think we read the system RAM capacity and allocate only 10-20% of it to the server. Within the server, that amount is split between the dbcache (up to a maximum of 1.5 GB) and the entry caches, with the remainder divided equally between the backends.

So if you have 64 GB of RAM and only 50 GB is free, we'll try to use 10-20% of 50 GB, which is 5-10 GB. From there, at the low end, we would likely carve out (up to) 1.5 GB for the dbcache, leaving 3.5 GB for the backends, split between them at 1.75 GB each.

There are a few ways to proceed:

If you want to tune manually, set nsslapd-cache-autosize to 0 and set nsslapd-cachememsize on each backend; similarly, set nsslapd-dbcachesize for the db cache. A sketch of this follows below.
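
For example, here is a minimal LDIF sketch of that manual tuning, using the values from this thread (the DNs are the standard ldbm config and backend entries; adjust the sizes to suit):

# on the ldbm config entry: disable autosizing and size the db cache
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cache-autosize
nsslapd-cache-autosize: 0
-
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 1073741824

# on each backend entry: size the entry cache
# (repeat for groupRoot with its own value)
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 23622320128

You would apply this with something like ldapmodify -x -D "cn=Directory Manager" -W -f tune.ldif (the filename is just an example), then restart the instance so the new sizes take effect.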

If you want to keep autotuning (or have issues disabling it), set nsslapd-cache-autosize to a higher value. This is the percentage of your system's RAM to allocate, though in practice we will still "use more" than this. For you, a value like 40 could be safe: that would give you roughly 24 GB of RAM, with about 11 GB per backend (depending on your free memory).
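
That change is a single attribute on the same ldbm config entry; a sketch using the 40% figure from above:

# raise the autotuning percentage (must stay below 100)
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cache-autosize
nsslapd-cache-autosize: 40

As the startup failure you saw implies, the value must stay below 100, and the resulting cache sizes are recalculated each time the server starts.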

If you do get access to your logs, there is autotuning decision output early in startup (in the errors log, typically /var/log/dirsrv/slapd-<instance>/errors) where the subsystem records why and how it made the decisions it did. It should look like:

[10/Jan/2019:13:10:12.287197304 +1000] - NOTICE - ldbm_back_start - found 6100060k physical memory
[10/Jan/2019:13:10:12.288431248 +1000] - NOTICE - ldbm_back_start - found 5380748k available
[10/Jan/2019:13:10:12.289558411 +1000] - NOTICE - ldbm_back_start - cache autosizing: db cache: 152501k
[10/Jan/2019:13:10:12.290579629 +1000] - NOTICE - ldbm_back_start - total cache size: 124929228 B;



> 
> Regards,
> Mark
>> 
>> Paul M. Whitney, RHCSA, CISSP
>> Chesapeake IT Consulting, Inc.
>> 2680 Tobacco Rd
>> Chesapeake Beach, MD 20732 
>> 
>> Work: 443-492-2872
>> Cell:   410.493.9448
>> Email: paul.whitney@xxxxxxxxxxxxxxxxx
>> CONFIDENTIALITY NOTICE 
>> The information contained in this facsimile or electronic message is confidential information intended for the use of the individual or entity named above. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this facsimile message to the intended recipient, you are hereby notified that any dissemination, or copying of this communication is strictly prohibited. If this message contains non-public personal information about any consumer or customer of the sender or intended recipient, you are further prohibited under penalty of law from using or disclosing the information to any third party by provisions of the federal Gramm-Leach-Bliley Act. If you have received this facsimile or electronic message in error, please immediately notify us by telephone and return or destroy the original message to assure that it is not read, copied, or distributed by others.
>> 
>> From: William Brown <wbrown@xxxxxxx>
>> Sent: Tuesday, January 15, 2019 7:22:18 PM
>> To: 389-users@xxxxxxxxxxxxxxxxxxxxxxx
>> Cc: Paul Whitney
>> Subject: Re: [389-users] Problem with userRoot cache
>>  
>> 
>> 
>> > On 16 Jan 2019, at 06:49, Mark Reynolds <mreynolds@xxxxxxxxxx> wrote:
>> > 
>> > What version were you previously on?
>> > 
>> > Sounds like an issue with autocache sizing. The errors log might give more info about why it's being reset.
>> > 
>> > Also check if "nsslapd-cache-autosize" is set under "cn=config,cn=ldbm database,cn=plugins,cn=config".  If it is, set it to zero to stop the autosizing.
>> 
>> Certainly it sounds like autosizing is still enabled here and is just resetting your values on you.
>> 
>> > 
>> > 
>> > On 1/15/19 3:42 PM, Paul Whitney wrote:
>> >> We recently updated to 389-ds-base-1.3.8.4-18. I am not sure I can attribute this issue to the update, since we are only now discovering it. But the nsslapd-cachememsize setting is reverting to a default value of 2 GB. I have attempted to restore the value through the console and restarting the instance. I have also tried stopping the instance and manually editing the dse.ldif file. In both cases, the value is replaced with the 2 GB value.
>> >> 
>> >> Any suggestions?
>> >> 
>> >> Paul M. Whitney
>> >> RHCSA, VCP, CISSP, Security+
>> >> Chesapeake IT Consulting, Inc.
>> >> 2680 Tobacco Rd
>> >> Chesapeake Beach, MD 20732 
>> >> 
>> >> Work: 443-492-2872
>> >> Cell:   410.493.9448
>> >> Email: paul.whitney@xxxxxxxxxxxxxxxxx
>> >> 
>> >> 
>> >> 
>> 
>> —
>> Sincerely,
>> 
>> William Brown
>> Software Engineer, 389 Directory Server
>> SUSE Labs
>> 
>> 
>> 

—
Sincerely,

William Brown
Software Engineer, 389 Directory Server
SUSE Labs
_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@xxxxxxxxxxxxxxxxxxxxxxx



