Re: [PATCH v3 31/35] lib: add memory allocations report in show_mem()

On Thu, 15 Feb 2024 18:51:41 -0500
Kent Overstreet <kent.overstreet@xxxxxxxxx> wrote:

> Most of that is data (505024), not text (68582, or 66k).
> 

And the 4K extra would have been data too.

> The data is mostly the alloc tags themselves (one per allocation
> callsite, and you compiled the entire kernel), so that's expected.
> 
> Of the text, a lot of that is going to be slowpath stuff - module load
> and unload hooks, formatting and printing the output, other assorted bits.
> 
> Then there's allocating and deallocating the obj extension vectors -
> not slowpath, but not super fast path either; it doesn't happen on
> every allocation.
> 
> The fastpath instruction count overhead is pretty small:
>  - actually doing the accounting - the core of slub.c, page_alloc.c,
>    percpu.c
>  - setting/restoring the alloc tag: this is overhead we add to every
>    allocation callsite, so it's the most relevant - but it's just a few
>    instructions.
> 
> So that's the breakdown. Definitely not zero overhead, but that fixed
> memory overhead (and additionally, the percpu counters) is the price we
> pay for very low runtime CPU overhead.
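For readers following along, here is a rough userspace model of the
mechanism described in the breakdown above. This is not the kernel code:
the names (alloc_hook, current_tag, my_malloc, the fixed-size tag
registry) are made up for illustration. It only shows where each piece
of the overhead lives - a static tag per allocation callsite (the data),
a counter bump in the allocator core (the accounting), a save/set/restore
of the current tag around every callsite (the few fastpath instructions),
and a report that walks the tags:

/*
 * Userspace toy model -- NOT the kernel code -- of the shape of the
 * overhead described above:
 *   - one static tag per allocation callsite (the fixed data cost)
 *   - a counter bump when the allocation is charged (the accounting)
 *   - save/set/restore of a "current tag" pointer at every callsite
 *     (the few extra fastpath instructions)
 */
#include <stdio.h>
#include <stdlib.h>

struct alloc_tag {
        const char *file;
        int line;
        unsigned long bytes;    /* the kernel uses percpu counters here */
        unsigned long calls;
};

static __thread struct alloc_tag *current_tag;

/*
 * Toy registry so the "report" below has something to walk; the kernel
 * instead places the static tags in a dedicated section and iterates that.
 */
static struct alloc_tag *tags[64];
static int nr_tags;

static void account_alloc(size_t size)
{
        struct alloc_tag *tag = current_tag;

        if (!tag)
                return;
        if (!tag->calls && nr_tags < 64)
                tags[nr_tags++] = tag;
        tag->bytes += size;
        tag->calls++;
}

/* Stand-in for the allocator core doing the accounting. */
static void *my_malloc(size_t size)
{
        account_alloc(size);
        return malloc(size);
}

/*
 * Per-callsite hook: one static tag (the data overhead) plus a
 * save/set/restore of current_tag (the fastpath instructions).
 */
#define alloc_hook(call) ({                                             \
        static struct alloc_tag _tag = { __FILE__, __LINE__ };          \
        struct alloc_tag *_old = current_tag;                           \
        current_tag = &_tag;                                            \
        __typeof__(call) _ret = (call);                                 \
        current_tag = _old;                                             \
        _ret;                                                           \
})

int main(void)
{
        void *p = alloc_hook(my_malloc(128));
        void *q = alloc_hook(my_malloc(64));

        /* The "report": bytes and calls per allocation callsite. */
        for (int i = 0; i < nr_tags; i++)
                printf("%s:%d: %lu bytes, %lu calls\n",
                       tags[i]->file, tags[i]->line,
                       tags[i]->bytes, tags[i]->calls);

        free(p);
        free(q);
        return 0;
}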

But where are the benchmarks that are not micro-benchmarks? How much
overhead does this cause for those? Is it in the noise, or is it noticeable?

-- Steve
