On Tue, Oct 22, 2019 at 03:28:00PM +0200, Michal Hocko wrote:
> On Tue 22-10-19 15:22:06, Michal Hocko wrote:
> > On Thu 17-10-19 17:28:04, Roman Gushchin wrote:
> > [...]
> > > Using a drgn* script I've got an estimation of slab utilization on
> > > a number of machines running different production workloads. In most
> > > cases it was between 45% and 65%, and the best number I've seen was
> > > around 85%. Turning kmem accounting off brings it to high 90s. Also
> > > it brings back 30-50% of slab memory. It means that the real price
> > > of the existing slab memory controller is way bigger than a pointer
> > > per page.
> >
> > How much of the memory are we talking about here?
>
> Just to be more specific. Your cover mentions several hundreds of MBs
> but there is no scale to the overal charged memory. How much of that is
> the actual kmem accounted memory.

As I wrote, on average it saves 30-45% of slab memory. The smallest
number I've seen was about 15%, the largest over 60%.

The amount of slab memory isn't a very stable metric in general: it
heavily depends on the workload pattern, memory pressure, uptime, etc.
In absolute numbers I've seen savings from ~60 MB for an empty VM to
more than 2 GB for some production workloads.

Btw, please note that after a recent change from Vlastimil, commit
6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting"), slab counters
include large allocations which are passed directly to the page
allocator. This makes the memory savings smaller in percentage terms,
but of course not in absolute numbers.

> > Also is there any pattern for specific caches that tend to utilize
> > much worse than others?

Caches which usually have many objects (e.g. inodes) initially have
better utilization, but as some of the objects get reclaimed the
utilization drops. And if the cgroup is already dead, nobody can reuse
these mostly empty slab pages, so it's pretty wasteful. So I don't
think the problem is specific to any particular cache; it's pretty
general.
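
For reference, below is a rough sketch of the kind of drgn measurement
mentioned above. It is *not* the script referenced in the cover letter:
it assumes a SLUB kernel built with CONFIG_SLUB_DEBUG (for
nr_slabs/total_objects), 4K pages and a ~5.3-era struct page layout,
and it ignores per-cpu (partial) slabs, so the result is only an
approximation of the overall utilization.

#!/usr/bin/env python3
# Rough slab utilization estimate for a running SLUB kernel.
# Approximation only: per-cpu slabs are ignored and a 4K page size
# is assumed rather than read from the kernel.

from drgn import program_from_kernel
from drgn.helpers.linux.list import list_for_each_entry
from drgn.helpers.linux.nodemask import for_each_online_node

prog = program_from_kernel()
PAGE_SIZE = 4096  # assumed

total_bytes = 0
used_bytes = 0

for cache in list_for_each_entry("struct kmem_cache",
                                 prog["slab_caches"].address_of_(),
                                 "list"):
    # SLUB packs the slab page order and objects per slab into 'oo'.
    order = cache.oo.x.value_() >> 16
    slab_bytes = PAGE_SIZE << order
    obj_size = cache.object_size.value_()

    for nid in for_each_online_node(prog):
        node = cache.node[nid]
        if not node:
            continue

        # nr_slabs/total_objects exist only with CONFIG_SLUB_DEBUG.
        nr_slabs = node.nr_slabs.counter.value_()
        total_objs = node.total_objects.counter.value_()

        # Free objects sit on partially filled slabs; full slabs have
        # none, and per-cpu slabs are not walked here.
        free_objs = 0
        for page in list_for_each_entry("struct page",
                                        node.partial.address_of_(),
                                        "slab_list"):
            free_objs += page.objects.value_() - page.inuse.value_()

        total_bytes += nr_slabs * slab_bytes
        used_bytes += (total_objs - free_objs) * obj_size

if total_bytes:
    print("slab utilization: %.1f%%" % (100.0 * used_bytes / total_bytes))

Run it as root on a live kernel with debug info available; the printed
ratio corresponds to the "slab utilization" numbers quoted above (bytes
in allocated objects vs. total slab memory).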