On 20/08/2021 at 09:49, Michael Ellerman wrote:
Kees Cook <keescook@xxxxxxxxxxxx> writes:
In preparation for FORTIFY_SOURCE performing compile-time and run-time
field bounds checking for memset(), avoid intentionally writing across
neighboring fields.
Add a struct_group() for the spe registers so that memset() can
correctly reason about the size:
In function 'fortify_memset_chk',
inlined from 'restore_user_regs.part.0' at arch/powerpc/kernel/signal_32.c:539:3:
include/linux/fortify-string.h:195:4: error: call to '__write_overflow_field' declared with attribute warning: detected write beyond size of field (1st parameter); maybe use struct_group()? [-Werror=attribute-warning]
195 | __write_overflow_field();
| ^~~~~~~~~~~~~~~~~~~~~~~~
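(Side note, for anyone not familiar with the helper: struct_group(), from
include/linux/stddef.h, roughly wraps the listed members in an anonymous
union so they remain addressable individually while also being addressable
as one named aggregate. A simplified sketch of the idea, not the exact
macro expansion:

	union {
		struct {			/* still reachable as before:  */
			unsigned long evr[32];	/*   thread.evr                */
			u64 acc;		/*   thread.acc                */
		};
		struct {			/* new aggregate view:         */
			unsigned long evr[32];	/*   thread.spe.evr            */
			u64 acc;		/*   thread.spe.acc            */
		} spe;				/*   sizeof(thread.spe) covers both */
	};

So memset(&thread.spe, 0, sizeof(thread.spe)) targets a single field as far
as FORTIFY_SOURCE is concerned, instead of writing past the end of evr[].)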
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Cc: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
Cc: Sudeep Holla <sudeep.holla@xxxxxxx>
Cc: linuxppc-dev@xxxxxxxxxxxxxxxx
Reported-by: kernel test robot <lkp@xxxxxxxxx>
Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
---
arch/powerpc/include/asm/processor.h | 6 ++++--
arch/powerpc/kernel/signal_32.c | 6 +++---
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index f348e564f7dd..05dc567cb9a8 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -191,8 +191,10 @@ struct thread_struct {
int used_vsr; /* set if process has used VSX */
#endif /* CONFIG_VSX */
#ifdef CONFIG_SPE
- unsigned long evr[32]; /* upper 32-bits of SPE regs */
- u64 acc; /* Accumulator */
+ struct_group(spe,
+ unsigned long evr[32]; /* upper 32-bits of SPE regs */
+ u64 acc; /* Accumulator */
+ );
unsigned long spefscr; /* SPE & eFP status */
unsigned long spefscr_last; /* SPEFSCR value on last prctl
call or trap return */
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 0608581967f0..77b86caf5c51 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -532,11 +532,11 @@ static long restore_user_regs(struct pt_regs *regs,
regs_set_return_msr(regs, regs->msr & ~MSR_SPE);
if (msr & MSR_SPE) {
/* restore spe registers from the stack */
- unsafe_copy_from_user(current->thread.evr, &sr->mc_vregs,
- ELF_NEVRREG * sizeof(u32), failed);
+ unsafe_copy_from_user(&current->thread.spe, &sr->mc_vregs,
+ sizeof(current->thread.spe), failed);
This makes me nervous, because the ABI is that we copy ELF_NEVRREG *
sizeof(u32) bytes, not whatever sizeof(current->thread.spe) happens to
be.
i.e. if we use sizeof, an inadvertent change to the fields in
thread_struct could change how many bytes we copy to/from userspace,
which would be an ABI break.
And that's not that hard to do, because it's not at all obvious that the
size and layout of fields in thread_struct affects the user ABI.
At the same time we don't want to copy the right number of bytes but
the wrong content, so from that point of view using sizeof is good :)
The way we handle it in ptrace is to have BUILD_BUG_ON()s to verify that
things match up, so maybe we should do that here too.
i.e. add:
BUILD_BUG_ON(sizeof(current->thread.spe) == ELF_NEVRREG * sizeof(u32));
You mean != I guess?
Not sure if you are happy doing that as part of this patch. I can always
do it later if not.
cheers
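For reference, a minimal sketch of how that check could sit right next to
the copy in restore_user_regs() (assuming the != form above; untested, and
it could just as well go in a follow-up patch):

	if (msr & MSR_SPE) {
		/* restore spe registers from the stack */
		/* Keep the user ABI fixed: the group must stay exactly
		 * ELF_NEVRREG * sizeof(u32) bytes, whatever later changes
		 * are made to thread_struct.
		 */
		BUILD_BUG_ON(sizeof(current->thread.spe) !=
			     ELF_NEVRREG * sizeof(u32));
		unsafe_copy_from_user(&current->thread.spe, &sr->mc_vregs,
				      sizeof(current->thread.spe), failed);

That way an inadvertent change to the fields breaks the build instead of
silently changing how many bytes we copy.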