On Mon, Sep 02, 2024 at 12:37:48PM GMT, Petr Špaček wrote:
> On 30. 08. 24 19:00, Petr Špaček wrote:
> > On 30. 08. 24 17:04, Pedro Falcato wrote:
> > > On Fri, Aug 30, 2024 at 04:28:33PM GMT, Petr Špaček wrote:
> > >
> > > Can you get us a dump of the /proc/<pid>/maps? It'd be interesting
> > > to see how exactly you're hitting this.
> >
> > https://users.isc.org/~pspacek/sf1717/bind-9.18.28-jemalloc-maps.xz
> > RSS was about 8.9 GB when the snapshot was taken.
>
> I'm curious about your conclusions from this data. Thank you for your time!

I'm not a jemalloc expert (maybe they could chime in), but a quick look
suggests jemalloc is poking _a lot_ of holes into your memory map (with
munmap). There were theories regarding jemalloc guard pages, but even
those don't seem to be the cause. E.g.:

7fa95d392000-7fa95d4ab000 rw-p 00000000 00:00 0
7fa95d4ac000-7fa95d4b7000 rw-p 00000000 00:00 0
7fa95d4b8000-7fa95d4dd000 rw-p 00000000 00:00 0
7fa95d4de000-7fa95d4f2000 rw-p 00000000 00:00 0
7fa95d4f3000-7fa95d4f9000 rw-p 00000000 00:00 0
7fa95d4fa000-7fa95d512000 rw-p 00000000 00:00 0
7fa95d513000-7fa95d53d000 rw-p 00000000 00:00 0
7fa95d53e000-7fa95d555000 rw-p 00000000 00:00 0
7fa95d556000-7fa95d5ab000 rw-p 00000000 00:00 0
7fa95d5ac000-7fa95d5b4000 rw-p 00000000 00:00 0

where we have roughly a one-page gap between every VMA. Either jemalloc
is a big fan of munmap on free(), or this is some novel guard page
technique I've never seen before :)

MADV_DONTNEED should work just fine on systems with overcommit on.
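To illustrate the distinction (a minimal standalone sketch, not
jemalloc's actual code; mapping sizes are made up): munmap() tears the
VMA down entirely and leaves a hole, while madvise(MADV_DONTNEED) drops
the backing pages but keeps the mapping in place:

/* munmap-vs-dontneed.c: minimal sketch (not jemalloc code) showing why
 * munmap() punches holes in /proc/<pid>/maps while MADV_DONTNEED does not. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 1 << 20; /* 1 MiB per mapping; size is arbitrary */

	/* Two anonymous mappings, as an allocator might create. */
	char *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (a == MAP_FAILED || b == MAP_FAILED)
		return 1;

	memset(a, 1, len); /* fault the pages in so RSS actually grows */
	memset(b, 1, len);

	/* Option 1: munmap() destroys the VMA itself, so the address range
	 * becomes a hole in /proc/<pid>/maps, like in the dump above. */
	munmap(a, len);

	/* Option 2: MADV_DONTNEED frees the backing pages (RSS shrinks) but
	 * keeps the single VMA intact; with overcommit on, the next touch
	 * simply faults in fresh zero pages. */
	madvise(b, len, MADV_DONTNEED);
	b[0] = 2; /* still a valid access; refaults a zero page */

	printf("inspect /proc/%d/maps, then press Enter\n", getpid());
	getchar();

	munmap(b, len);
	return 0;
}

Diffing the maps while that sits at getchar() makes the difference
obvious: the munmap'd range has vanished from the map, while the
madvise'd one is still there as a single contiguous VMA with its RSS
dropped.

-- 
Pedro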