Re: [PATCH] x86_64: Change .weak to SYM_FUNC_START_WEAK for arch/x86/lib/mem*_64.S

On 29. 10. 20, 19:05, Fangrui Song wrote:
> Commit 393f203f5fd5 ("x86_64: kasan: add interceptors for
> memset/memmove/memcpy functions") added .weak directives to
> arch/x86/lib/mem*_64.S instead of changing the existing SYM_FUNC_START_*
> macros.

There were no SYM_FUNC_START_* macros in 2015.

> This can lead to the assembly snippet `.weak memcpy ... .globl
> memcpy`

SYM_FUNC_START_LOCAL(memcpy) does not add .globl, so I don't understand the rationale.
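
For reference, the SYM_FUNC_START_* variants differ only in the linkage
directive they emit. This is a rough sketch of the include/linux/linkage.h
expansions from memory, not a verbatim quote:

	SYM_FUNC_START(name)		/* -> .globl name, alignment, then name: */
	SYM_FUNC_START_WEAK(name)	/* -> .weak name, alignment, then name: */
	SYM_FUNC_START_LOCAL(name)	/* -> no linkage directive, just alignment and name: */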

> which will produce a STB_WEAK memcpy with GNU as but STB_GLOBAL
> memcpy with LLVM's integrated assembler before LLVM 12. LLVM 12 (since
> https://reviews.llvm.org/D90108) will error on such an overridden symbol
> binding.
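
If it helps, here is a minimal standalone snippet (a hypothetical test file,
not taken from the kernel sources) showing the interaction described above:

	.weak	memcpy		# first directive: weak binding
memcpy:
	ret
	.globl	memcpy		# later directive asks for global binding

Per the commit message, GNU as produces a STB_WEAK memcpy here, LLVM's
integrated assembler before LLVM 12 produces STB_GLOBAL, and LLVM 12 errors
out on the changed binding.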

> Use the appropriate SYM_FUNC_START_WEAK instead.

> Fixes: 393f203f5fd5 ("x86_64: kasan: add interceptors for memset/memmove/memcpy functions")
> Reported-by: Sami Tolvanen <samitolvanen@xxxxxxxxxx>
> Signed-off-by: Fangrui Song <maskray@xxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> ---
>  arch/x86/lib/memcpy_64.S  | 4 +---
>  arch/x86/lib/memmove_64.S | 4 +---
>  arch/x86/lib/memset_64.S  | 4 +---
>  3 files changed, 3 insertions(+), 9 deletions(-)

> diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
> index 037faac46b0c..1e299ac73c86 100644
> --- a/arch/x86/lib/memcpy_64.S
> +++ b/arch/x86/lib/memcpy_64.S
> @@ -16,8 +16,6 @@
>   * to a jmp to memcpy_erms which does the REP; MOVSB mem copy.
>   */
> -.weak memcpy
> -
>  /*
>   * memcpy - Copy a memory block.
>   *
> @@ -30,7 +28,7 @@
>   * rax original destination
>   */
>  SYM_FUNC_START_ALIAS(__memcpy)
> -SYM_FUNC_START_LOCAL(memcpy)
> +SYM_FUNC_START_WEAK(memcpy)
>  	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
>  		      "jmp memcpy_erms", X86_FEATURE_ERMS
> diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
> index 7ff00ea64e4f..41902fe8b859 100644
> --- a/arch/x86/lib/memmove_64.S
> +++ b/arch/x86/lib/memmove_64.S
> @@ -24,9 +24,7 @@
>   * Output:
>   * rax: dest
>   */
> -.weak memmove
> -
> -SYM_FUNC_START_ALIAS(memmove)
> +SYM_FUNC_START_WEAK(memmove)
>  SYM_FUNC_START(__memmove)
>  	mov %rdi, %rax
> diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
> index 9ff15ee404a4..0bfd26e4ca9e 100644
> --- a/arch/x86/lib/memset_64.S
> +++ b/arch/x86/lib/memset_64.S
> @@ -6,8 +6,6 @@
>  #include <asm/alternative-asm.h>
>  #include <asm/export.h>
> -.weak memset
> -
>  /*
>   * ISO C memset - set a memory block to a byte value. This function uses fast
>   * string to get better performance than the original function. The code is
> @@ -19,7 +17,7 @@
>   *
>   * rax   original destination
>   */
> -SYM_FUNC_START_ALIAS(memset)
> +SYM_FUNC_START_WEAK(memset)
>  SYM_FUNC_START(__memset)
>  	/*
>  	 * Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended



--
js
suse labs


