On Fri, Dec 16, 2022 at 3:03 PM Tetsuo Handa
<penguin-kernel@xxxxxxxxxxxxxxxxxxx> wrote:
>
> On 2022/12/15 18:36, Geert Uytterhoeven wrote:
> > The next line is:
> >
> >     scr_memsetw(save, erase, array3_size(logo_lines, new_cols, 2));
> >
> > So how can this turn out to be uninitialized later below?
> >
> >     scr_memcpyw(q, save, array3_size(logo_lines, new_cols, 2));
> >
> > What am I missing?
>
> Good catch. It turned out that this was a KMSAN problem (i.e. a false
> positive report).
>
> On x86_64, scr_memsetw() is implemented as
>
>     static inline void scr_memsetw(u16 *s, u16 c, unsigned int count)
>     {
>             memset16(s, c, count / 2);
>     }
>
> and memset16() is implemented as
>
>     static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
>     {
>             long d0, d1;
>             asm volatile("rep\n\t"
>                          "stosw"
>                          : "=&c" (d0), "=&D" (d1)
>                          : "a" (v), "1" (s), "0" (n)
>                          : "memory");
>             return s;
>     }
>
> Plain memset() in arch/x86/include/asm/string_64.h is redirected to
> __msan_memset(), but the memsetXX() variants are not redirected to
> __msan_memsetXX(). That is, memory initialization done via memsetXX()
> does not update KMSAN's shadow memory.
>
> KMSAN folks, how should we fix this problem? Redirect the
> assembly-implemented memset16(size) to memset(size*2) if KMSAN is
> enabled?
I think the easiest way to fix it would be to disable the memsetXX asm
implementations with something like:

-------------------------------------------------------------------------------------------------
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 888731ccf1f67..5fb330150a7d1 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -33,6 +33,7 @@ void *memset(void *s, int c, size_t n);
 #endif
 void *__memset(void *s, int c, size_t n);
 
+#if !defined(__SANITIZE_MEMORY__)
 #define __HAVE_ARCH_MEMSET16
 static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
 {
@@ -68,6 +69,7 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n)
 		     : "memory");
 	return s;
 }
+#endif
 
 #define __HAVE_ARCH_MEMMOVE
 #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
-------------------------------------------------------------------------------------------------

This way we'll just pick the existing C implementations instead of
reinventing them.

-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Liana Sebastian
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg