On Fri, 3 May 2019 at 01:29, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
> On 5/2/19 11:46 AM, Igor Podlesny wrote:
> > On Thu, 2 May 2019 at 05:02, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
> > [...]
> >> FWIW, if you still have an OSD up with tcmalloc, it's probably worth
> >> looking at the heap stats to see how much memory tcmalloc thinks it has
> >> allocated vs how much RSS memory the process is using. It's quite
> >> possible that there is memory that has been unmapped but that the
> >> kernel can't (or has decided not yet to) reclaim. Transparent huge
> >> pages can potentially have an effect here with both tcmalloc and
> >> jemalloc, so it's not certain that switching the allocator will fix
> >> it entirely.
> >
> > Most likely wrong -- the kernel's default THP setting is "madvise".
> > Neither tcmalloc nor jemalloc would madvise() to make that happen.
> > With a fresh enough jemalloc you could have it, but it needs special
> > malloc.conf'ing.
>
> From one of our CentOS nodes with no special actions taken to change
> THP settings (though it's possible it was inherited from something else):
>
> $ cat /etc/redhat-release
> CentOS Linux release 7.5.1804 (Core)
> $ cat /sys/kernel/mm/transparent_hugepage/enabled
> [always] madvise never

"madvise" will enter direct reclaim like "always" but only
for regions that are have used madvise(MADV_HUGEPAGE). This
is the default behaviour.

-- https://www.kernel.org/doc/Documentation/vm/transhuge.txt

> And regarding madvise and alternate memory allocators:
> https: [...]

Did you ever read any of it? One of those links says:

"By default jemalloc does not use huge pages for heap memory (there is
opt.metadata_thp which uses THP for internal metadata though)"

(and I've said

> > Neither tcmalloc nor jemalloc would madvise() to make that happen.
> > With a fresh enough jemalloc you could have it, but it needs special
> > malloc.conf'ing.

before)

--
End of message. Next message?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
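
For anyone following along, a minimal sketch of the comparison Mark suggests
(tcmalloc's view of the heap vs the RSS the kernel reports). It assumes a
tcmalloc-built OSD with id 0 and admin access on the node; adjust the id to
match your cluster:

$ ceph tell osd.0 heap stats      # what tcmalloc thinks it has allocated/freed
$ ps -o rss,cmd -C ceph-osd       # resident set size (KiB) as seen by the kernel

A large gap between the two is typically memory tcmalloc has already freed but
the kernel has not reclaimed yet; "ceph tell osd.0 heap release" asks tcmalloc
to hand its free pages back to the OS.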
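
And as an illustration of the "special malloc.conf'ing" Igor mentions: with
jemalloc 5.1 or newer, THP behaviour can be set per process through
MALLOC_CONF. A rough sketch only -- it assumes the OSD actually runs with
jemalloc (here via LD_PRELOAD), and the library path and ceph-osd arguments
are illustrative, not something from this thread:

$ MALLOC_CONF="thp:always,metadata_thp:auto" \
    LD_PRELOAD=/usr/lib64/libjemalloc.so.2 \
    ceph-osd -f --cluster ceph --id 0

"thp:always" makes jemalloc madvise(MADV_HUGEPAGE) its mappings, so they are
eligible for huge pages even when the kernel setting is "madvise";
"metadata_thp" only affects jemalloc's internal metadata, which is what the
opt.metadata_thp sentence quoted above refers to.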