Re: autosizing the cache

On Sun, 2018-03-18 at 21:57 -0500, Sergei Gerasenko wrote:
> Thank you for the detailed response, William. That’s great info. You
> mentioned FreeIPA in passing and that’s actually what I use 389-ds
> for. You mentioned dogtag eating memory. You mean it has a memory
> leak or some other memory mismanagement issue? 

Dogtag is Java/Tomcat. It's well known for consuming large volumes of
RAM!

I actually wrote the autotuning system specifically for FreeIPA since
it historically did no DS tuning. So I'm glad that you are finding it
useful! 

> 
> I have 125G of RAM on my systems. So, I can allocate quite a bit of
> RAM to the caches. My plan is to set nsslapd-cache-autosize to 20%
> and set the dncache size to 5G for all three backends. Does that
> sound reasonable to you? These are dedicated FreeIPA machines, so
> it’s the main consumer of the memory.

Sure, that sounds reasonable to me. I'd want to see your database
sizes to make a complete assessment, but the plan looks sensible.

> 
> Also, can I set the dncachesize once at the cn=config,cn=ldbm
> database,cn=plugins,cn=config level and have all the other backends
> inherit that? Would I have to remove that parameter from the other
> backends for it to become inherited?

Sadly there is no method to inherit cache sizes.

Additionally, check that the attribute at the cn=config,cn=ldbm
database,cn=plugins,cn=config level is actually d'B'cachesize, not
d'N'cachesize.
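Since nothing is inherited, the DN cache has to be set on each backend entry individually. A hedged LDIF sketch (the userRoot and changelog DNs follow the pattern shown later in this thread; adjust the backend names to match your deployment; 5368709120 bytes = 5 GiB, matching the plan above):

```ldif
# One modify block per backend; there is no global/inherited value.
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dncachememsize
nsslapd-dncachememsize: 5368709120

dn: cn=changelog,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dncachememsize
nsslapd-dncachememsize: 5368709120
```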

Hope that helps! 

> 
> Thanks again, William!
>   Sergei
>  
> > On Mar 18, 2018, at 8:06 PM, William Brown <william@xxxxxxxxxxxxx.au> wrote:
> > 
> > On Tue, 2018-03-13 at 21:10 -0500, Sergei Gerasenko wrote:
> > > Hi William,
> > > 
> > > With autosizing on I configured the changelog backend:
> > > 
> > > dn: cn=changelog,cn=ldbm database,cn=plugins,cn=config
> > > cn: changelog
> > > objectClass: top
> > > objectClass: extensibleObject
> > > objectClass: nsBackendInstance
> > > nsslapd-suffix: cn=changelog
> > > nsslapd-cachesize: -1
> > > nsslapd-cachememsize: 8858370048
> > > nsslapd-readonly: off
> > > nsslapd-require-index: off
> > > nsslapd-directory: /var/lib/dirsrv/slapd-CNVR-NET/db/changelog
> > > nsslapd-dncachememsize: 3000000000
> > > 
> > > Here’s the ldif:
> > > 
> > > dn: cn=changelog,cn=ldbm database,cn=plugins,cn=config
> > > changetype: modify
> > > replace: nsslapd-dncachememsize
> > > nsslapd-dncachememsize: 3000000000
> > > -
> > > replace: nsslapd-cachememsize
> > > nsslapd-cachememsize: 3000000000
> > > 
> > > The server refused to change nsslapd-cachememsize because of
> > > autosizing but nsslapd-dncachememsize did get changed. So it
> > > seems
> > > that I can still control nsslapd-dncachememsize? When I restart
> > > 389-
> > > ds, the 3G persists as well.
> > > 
> > > Does that make sense?
> > 
> > Yes it does.
> > 
> > > 
> > > Thank you for a quick response!
> > 
> > At the current point in time, dncachememsize is NOT controlled by
> > autosizing. There are some ideas in my head about how to improve
> > this, but the issue is legacy. We have one control: the autosize
> > percentage. But that doesn't really represent a complete picture of
> > how we size our backends and data.
> > 
> > We have to continue to support the current variables, so I can't
> > easily change this from its current form.
> > 
> > *IF* I were given unlimited power (I never will be ;) ) I would
> > probably make the interface something like:
> > 
> > backend-foo:
> > autosize-max-memory: <value in bytes/mb/gb>
> > entrycache-slice: A%
> > dncache-slice: B%
> > dbcache-slice: C%
> > ndncache-slice: D%
> > 
> > So how would this work? 
> > 
> > You have autosize-max-memory, which is the "total ram" we want to
> > use. Let's say 2GB. Each backend would be given its own "limit".
> > We'd try to detect some sane values on your system of course, but
> > we can only do so much :) So say each backend gets 5-10% of the
> > system ram limit. (We have to be super conservative due to dogtag
> > in FreeIPA, which eats ram.)
> > 
> > Then we slice that up, so entrycache-slice + dncache-slice +
> > dbcache-slice + ndncache-slice = 100%. So by default we might do
> > something like:
> > 
> > entrycache-slice: 75
> > dbcache-slice: 10
> > dncache-slice: 10
> > ndncache-slice: 5
> > 
> > So this on a 1GB backend translates to:
> > 
> > 750MB of entrycache
> > 100MB of dncache
> > 100MB of dbcache
> > 50MB of ndncache
> > 
> > Of course, some testing would be needed to find the right default
> > slice values.
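The worked example above can be sketched as a small function (a hypothetical interface mirroring the proposed slice settings; none of these option names exist in 389-ds today):

```python
# Sketch of the proposed proportional slicing (hypothetical interface,
# not an existing 389-ds option).
def slice_caches(backend_max_bytes, slices):
    """Split one backend's memory budget among its caches by percentage."""
    assert sum(slices.values()) == 100
    return {name: backend_max_bytes * pct // 100 for name, pct in slices.items()}

default_slices = {"entrycache": 75, "dbcache": 10, "dncache": 10, "ndncache": 5}
print(slice_caches(1_000_000_000, default_slices))
# entrycache 750 MB, dbcache 100 MB, dncache 100 MB, ndncache 50 MB
```

Raising the single budget then scales every cache by the same proportions, which is the point of the design.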
> > 
> > 
> > Then it's as simple as: if you want more from your backend, just
> > raise the one memlimit value, and everything increases at the
> > correct proportional rate. Rather than hand-tuning everything, we
> > make informed design decisions and you just say "here's my memory
> > max". You still have control to give each backend a different value
> > (say, in FreeIPA you may want 4GB for userRoot and 1GB for the CA
> > system backend), so you keep lots of flexibility as an admin
> > without worrying about so many moving, interacting parts.
> > 
> > 
> > Hope that helps,
> > 
> > > 
> > > Sergei
> > > 
> > > 
> > > > On Mar 13, 2018, at 7:40 PM, William Brown <william@blackhats.net.au> wrote:
> > > > 
> > > > If autosize > 0, we write a new entrycache/cachememsize every
> > > > start up.
> > > > 
> > > > So you only need to set autosize to between 1 and 99 for it to
> > > > work.
> > > > 
> > > > There is some other logic in there to account for other
> > > > scenarios. For example, if you set autosize to 0 AND you set
> > > > the entry cachesize to 0, we'll autosize it at start up anyway.
> > > > BUT if you have autosize = 0 and entry cachesize > 0, we won't
> > > > touch it.
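The startup rules quoted above can be sketched as a small decision function (FALLBACK_PCT is a hypothetical stand-in for whatever default the server applies when both values are 0; it is not a value taken from this thread):

```python
# Sketch of the startup entry-cache sizing rules described above.
# FALLBACK_PCT is a hypothetical default, not taken from the thread.
FALLBACK_PCT = 10

def entry_cache_at_startup(autosize_pct, configured_size, system_ram):
    if autosize_pct > 0:
        # autosize > 0: a new cachememsize is written every start up
        return system_ram * autosize_pct // 100
    if configured_size == 0:
        # autosize == 0 and cachesize == 0: autosized anyway at start up
        return system_ram * FALLBACK_PCT // 100
    # autosize == 0 and cachesize > 0: the admin's value is not touched
    return configured_size
```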
> > > 
> > > _______________________________________________
> > > 389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
> > > To unsubscribe send an email to 389-users-leave@lists.fedoraproject.org
> > 
> > -- 
> > Thanks,
> > 
> > William Brown
> 
-- 
Thanks,

William Brown



