Re: [PATCH v2 2/3] alloc_tag: uninline code gated by mem_alloc_profiling_key in slab allocator


 



On 2/2/25 00:18, Suren Baghdasaryan wrote:
> When a sizable code section is protected by a disabled static key, that
> code still gets loaded into the instruction cache even though it is not
> executed, consuming cache space and increasing cache misses. This can be
> remedied by moving such code into a separate uninlined function.
> On a Pixel6 phone, slab allocation profiling overhead measured with
> CONFIG_MEM_ALLOC_PROFILING=y and profiling disabled is:
> 
>              baseline             modified
> Big core     3.31%                0.17%
> Medium core  3.79%                0.57%
> Little core  6.68%                1.28%
> 
> This improvement comes at the expense of the configuration where profiling
> is enabled, since there is now an additional function call. The overhead
> of this additional call on Pixel6 is:
> 
> Big core     0.66%
> Medium core  1.23%
> Little core  2.42%
> 
> However, this is negligible compared with the overall overhead of memory
> allocation profiling when it is enabled.
> On x86 this patch does not make a noticeable difference because the
> overhead with mem_alloc_profiling_key disabled is much lower (under 1%)
> to start with, so any improvement is less visible and harder to
> distinguish from the noise. The overhead from the additional call when
> profiling is enabled is also within noise levels.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
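
For context, here is a minimal sketch of the pattern the changelog
describes: the slow-path work is moved into an out-of-line noinline
helper, and only a static-branch test plus a call instruction remain
inlined at the call sites. The names below are hypothetical and not the
actual alloc_tag/slab code touched by this patch.

	#include <linux/jump_label.h>
	#include <linux/compiler.h>

	DEFINE_STATIC_KEY_FALSE(my_profiling_key);

	/*
	 * Out-of-line helper: the bulk of the accounting code lives
	 * here, so it does not occupy i-cache space at the call sites
	 * while the static key is disabled.
	 */
	static noinline void my_profiling_slowpath(void *obj, size_t size)
	{
		/* ... accounting work ... */
	}

	/*
	 * Inlined fast path: while the key is disabled the branch is
	 * patched to a NOP, and only that NOP ends up in the caller's
	 * instruction stream; enabling the key costs one extra call.
	 */
	static inline void my_alloc_hook(void *obj, size_t size)
	{
		if (static_branch_unlikely(&my_profiling_key))
			my_profiling_slowpath(obj, size);
	}

The extra function call in the enabled case is what produces the small
enabled-path overhead quoted in the changelog above.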




