On 23/04/18 08:16, Heiher wrote:
Hi,
IIRC, v1 is a temporary register; its value is not preserved across
function calls. I don't see any code generated by the compiler to
restore the value of v1 after it is clobbered.
v1 is conventionally used for a function return value, and as such can be
changed by called functions. However, bzero is called from inline
assembly and v1 is not in the clobber list:
https://elixir.bootlin.com/linux/v4.17-rc1/source/arch/mips/include/asm/uaccess.h#L652
So the calling function does not expect that register to have been used,
and can legitimately expect its value to survive the call - which,
without this patch, it does not, as demonstrated by the test code.
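For illustration, here is a condensed sketch of that call site (loosely
based on __clear_user() in uaccess.h; the exact macros, register choices
and clobbers in the kernel source differ):

static inline unsigned long clear_user_sketch(void *addr,
					      unsigned long size)
{
	unsigned long res;

	/*
	 * __bzero is reached via inline assembly, so the clobber list
	 * is the only thing telling GCC which registers the callee may
	 * modify. $3 (v1) is absent from it, so the compiler is free
	 * to keep a live value in v1 across the call.
	 */
	__asm__ __volatile__(
		"move	$4, %1\n\t"	/* a0 = destination */
		"move	$5, $0\n\t"	/* a1 = 0 (fill byte) */
		"move	$6, %2\n\t"	/* a2 = length */
		"jal	__bzero\n\t"
		"move	%0, $6"		/* a2 = bytes left unset */
		: "=r" (res)
		: "r" (addr), "r" (size)
		: "$4", "$5", "$6", "$8", "$9", "$31", "memory");
	return res;
}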
Thanks,
Matt
On Sun, Apr 22, 2018 at 9:54 PM, Greg Kroah-Hartman
<gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matt Redfearn <matt.redfearn@xxxxxxxx>
commit c96eebf07692e53bf4dd5987510d8b550e793598 upstream.
The label .Llast_fixup\@ is jumped to on page fault within the final
byte set loop of memset (on < MIPSR6 architectures). In this fault
handler, the v1 register is needlessly set to a2 & STORMASK, which
clobbers v1 for the calling function. This can be observed with the
following test code:
static int __init __attribute__((optimize("O0"))) test_clear_user(void)
{
	register int t asm("v1");
	char *test;
	int j, k;

	pr_info("\n\n\nTesting clear_user\n");
	test = vmalloc(PAGE_SIZE);
	for (j = 256; j < 512; j++) {
		t = 0xa5a5a5a5;
		if ((k = clear_user(test + PAGE_SIZE - 256, j)) != j - 256) {
			pr_err("clear_user (%px %d) returned %d\n",
			       test + PAGE_SIZE - 256, j, k);
		}
		if (t != 0xa5a5a5a5) {
			pr_err("v1 was clobbered to 0x%x!\n", t);
		}
	}
	return 0;
}
late_initcall(test_clear_user);
Which demonstrates that v1 is indeed clobbered (MIPS64). The values seen
are a2 & STORMASK, i.e. the low bits of the count of bytes left unset:
Testing clear_user
v1 was clobbered to 0x1!
v1 was clobbered to 0x2!
v1 was clobbered to 0x3!
v1 was clobbered to 0x4!
v1 was clobbered to 0x5!
v1 was clobbered to 0x6!
v1 was clobbered to 0x7!
Since the number of bytes that could not be set is already contained in
a2, the andi placing a value in v1 is not necessary and actively
harmful in clobbering v1.
Reported-by: James Hogan <jhogan@xxxxxxxxxx>
Signed-off-by: Matt Redfearn <matt.redfearn@xxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: linux-mips@xxxxxxxxxxxxxx
Cc: stable@xxxxxxxxxxxxxxx
Patchwork: https://patchwork.linux-mips.org/patch/19109/
Signed-off-by: James Hogan <jhogan@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
arch/mips/lib/memset.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/mips/lib/memset.S
+++ b/arch/mips/lib/memset.S
@@ -210,7 +210,7 @@
 
 .Llast_fixup\@:
 	jr		ra
-	 andi		v1, a2, STORMASK
+	 nop
 
 .Lsmall_fixup\@:
 	PTR_SUBU	a2, t1, a0
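
A note on the shape of the fix: the andi sits in the branch delay slot
of the jr ra, so it executed unconditionally on the way out of the
fixup handler. The delay slot still has to hold an instruction, which
is why the patch substitutes a nop rather than dropping the line. An
annotated view of the patched path (paraphrasing the hunk above):

.Llast_fixup\@:
	jr	ra		# return to the inline-asm call site
	 nop			# delay slot: executes before the jump
				# lands; a2 already holds the count of
				# unset bytes, so nothing else is needed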