Re: [PATCH] memblock: fix section mismatch warning

On Thu, Feb 25, 2021 at 4:08 PM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
> On Thu, Feb 25, 2021 at 03:06:27PM +0100, Arnd Bergmann wrote:
> > On Thu, Feb 25, 2021 at 2:47 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
> > >
> > > (I don't see why to not inline that function, but I am obviously not a
> > > compiler person :) )
> >
> > Looking at the assembler output in the arm64 build that triggered the
> > warning, I see this code:
>
> "push %rbp" seems more x86 for me, but that's not really important :)

I suppose the relocation names like "R_X86_64_32S" and the command
line I used could have told me the same ;-)

> I wonder what happens with other memblock inline APIs, particularly with
> alloc wrappers. Do they still get inlined?

Trying the same configuration here, with all the allocation functions
marked __init again, they all get inlined by clang, regardless of the
'__init', 'inline' and '__always_inline' tags.

With gcc-7 and gcc-10, one instance of the plain 'memblock_alloc' does not
get fully inlined if I revert the __always_inline back to plain 'inline':

        .type   memblock_alloc.constprop.0, @function
memblock_alloc.constprop.0:
.LASANPC4090:
        pushq   %rbp    #
# include/linux/memblock.h:407: static inline __init void *memblock_alloc(phys_addr_t size, phys_addr_t align)
        movq    %rdi, %rbp      # tmp84, size
# include/linux/memblock.h:409:    return memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
        call    __sanitizer_cov_trace_pc        #
        movq    %rbp, %rdi      # size,
        orl     $-1, %r8d       #,
        xorl    %ecx, %ecx      #
        xorl    %edx, %edx      #
        movl    $4096, %esi     #,
# include/linux/memblock.h:411: }
        popq    %rbp    #
# include/linux/memblock.h:409:    return memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
        jmp     memblock_alloc_try_nid  #
        .size   memblock_alloc.constprop.0, .-memblock_alloc.constprop.0

Apparently, this is an optimization for code size, as there are multiple
callers in kernel/dma/swiotlb.c and gcc can move the call to
__sanitizer_cov_trace_pc into a single place here.
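
For reference, the .constprop copy above corresponds to the small wrapper
in include/linux/memblock.h; the sketch below is reconstructed from the
:407-:411 line references in the assembler comments, not quoted verbatim
from the header. Keeping it __always_inline, as the patch under discussion
does, makes gcc fold it into its __init callers instead of emitting this
out-of-line copy:

        /* include/linux/memblock.h (sketch based on the comments above) */
        static __always_inline __init void *memblock_alloc(phys_addr_t size,
                                                            phys_addr_t align)
        {
                /* thin wrapper: all callers funnel into memblock_alloc_try_nid() */
                return memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
                                              MEMBLOCK_ALLOC_ACCESSIBLE,
                                              NUMA_NO_NODE);
        }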

       Arnd



