Re: [RFC PATCH bpf-next 1/3] arm64: insn: Fix two bugs in encoding 32-bit logical immediates

Hi Luke,

Thanks a lot for nailing these bugs.

On Wed,  6 May 2020 18:05:01 -0700
Luke Nelson <lukenels@xxxxxxxxxxxxxxxxx> wrote:

> This patch fixes two issues present in the current function for encoding
> arm64 logical immediates when using the 32-bit variants of instructions.
> 
> First, the code does not correctly reject an all-ones 32-bit immediate
> and returns an undefined instruction encoding, which can crash the kernel.

You make it sound more dramatic than it needs to be! ;-) As you pointed
out below, nothing in the kernel calls this code to encode a 32bit
immediate, so triggering a crash is not possible (unless you manage to
exploit something else to call into this code). It definitely needs
fixing though!

> The fix is to add a check for this case.
> 
> Second, the code incorrectly rejects some 32-bit immediates that are
> actually encodable as logical immediates. The root cause is that the code
> uses a default mask of 64-bit all-ones, even for 32-bit immediates. This
> causes an issue later on when the mask is used to fill the top bits of
> the immediate with ones, shown here:
> 
>   /*
>    * Pattern: 0..01..10..01..1
>    *
>    * Fill the unused top bits with ones, and check if
>    * the result is a valid immediate (all ones with a
>    * contiguous ranges of zeroes).
>    */
>   imm |= ~mask;
>   if (!range_of_ones(~imm))
>           return AARCH64_BREAK_FAULT;
> 
> To see the problem, consider an immediate of the form 0..01..10..01..1,
> where the upper 32 bits are zero, such as 0x80000001. The code checks
> if ~(imm | ~mask) contains a range of ones: the incorrect mask yields
> 1..10..01..10..0, which fails the check; the correct mask yields
> 0..01..10..0, which succeeds.
> 
> The fix is to use a 32-bit all-ones default mask for 32-bit immediates.

Paging this thing back in is really hard (I've only had one coffee, more
needed). Yes, I see what you mean. Duh! I think this only happens if
mask hasn't been adjusted by the "pattern spotting" code in the first
place, though.
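
Just to convince myself, here's a throwaway user-space sketch of that
step (range_of_ones() below is my own stand-in for the kernel helper,
and the constants are only there to show the failure):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in: true if v is a single contiguous run of ones. */
static bool range_of_ones(uint64_t v)
{
	if (!v)
		return false;
	v >>= __builtin_ctzll(v);
	return ((v + 1) & v) == 0;
}

int main(void)
{
	uint64_t imm = 0x80000001ULL;		/* encodable 32-bit logical immediate */
	uint64_t mask64 = ~0ULL;		/* current default mask */
	uint64_t mask32 = 0xffffffffULL;	/* what a 32-bit immediate needs */
	uint64_t bad  = imm | ~mask64;		/* top 32 bits stay zero */
	uint64_t good = imm | ~mask32;		/* top 32 bits filled with ones */

	printf("64-bit mask: %d\n", range_of_ones(~bad));	/* 0 -> wrongly rejected */
	printf("32-bit mask: %d\n", range_of_ones(~good));	/* 1 -> accepted */
	return 0;
}

With the 64-bit mask, ~imm ends up with two runs of ones and the check
fails; with the 32-bit mask the top bits get filled and it passes.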

> 
> Currently, the only user of this function is in
> arch/arm64/kvm/va_layout.c, which uses 64-bit immediates and won't
> trigger these bugs.
> 
> We tested the new code against llvm-mc with all 1,302 encodable 32-bit
> logical immediates and all 5,334 encodable 64-bit logical immediates.

That, on its own, is awesome information. Do you have any pointers on
how to set this up?
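
(Purely guessing at the setup: every encodable immediate is a rotated
run of ones in a 2/4/8/16/32-bit element replicated across the
register, so something like the sketch below could spit them all out as
"and" instructions for llvm-mc -triple=aarch64 -show-encoding, with the
resulting bytes diffed against what aarch64_encode_immediate()
produces. But I'd like to hear what you actually did.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int esz, ones, ror, i, total = 0;

	for (esz = 2; esz <= 32; esz *= 2) {
		uint64_t emask = (1ULL << esz) - 1;

		for (ones = 1; ones < esz; ones++) {
			for (ror = 0; ror < esz; ror++) {
				/* contiguous run of ones, rotated right by ror */
				uint64_t elt = (1ULL << ones) - 1;
				uint64_t imm = 0;

				elt = ((elt >> ror) | (elt << (esz - ror))) & emask;

				/* replicate the element to fill 32 bits */
				for (i = 0; i < 32; i += esz)
					imm |= elt << i;

				printf("and w0, w0, #0x%08llx\n",
				       (unsigned long long)imm);
				total++;
			}
		}
	}

	fprintf(stderr, "%u immediates\n", total);	/* 2+12+56+240+992 = 1302 */
	return 0;
}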

> 
> Fixes: ef3935eeebff ("arm64: insn: Add encoder for bitwise operations using literals")
> Co-developed-by: Xi Wang <xi.wang@xxxxxxxxx>
> Signed-off-by: Xi Wang <xi.wang@xxxxxxxxx>
> Signed-off-by: Luke Nelson <luke.r.nels@xxxxxxxxx>
> ---
>  arch/arm64/kernel/insn.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 4a9e773a177f..42fad79546bb 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -1535,7 +1535,7 @@ static u32 aarch64_encode_immediate(u64 imm,
>  				    u32 insn)
>  {
>  	unsigned int immr, imms, n, ones, ror, esz, tmp;
> -	u64 mask = ~0UL;
> +	u64 mask;
>  
>  	/* Can't encode full zeroes or full ones */
>  	if (!imm || !~imm)
> @@ -1543,13 +1543,15 @@ static u32 aarch64_encode_immediate(u64 imm,
>  
>  	switch (variant) {
>  	case AARCH64_INSN_VARIANT_32BIT:
> -		if (upper_32_bits(imm))
> +		if (upper_32_bits(imm) || imm == 0xffffffffUL)

nit: I don't like the fact that this creates a small asymmetry in the
way we check things (we start by checking !~imm, which is not relevant
to 32bit constants).

>  			return AARCH64_BREAK_FAULT;
>  		esz = 32;
> +		mask = 0xffffffffUL;
>  		break;
>  	case AARCH64_INSN_VARIANT_64BIT:
>  		insn |= AARCH64_INSN_SF_BIT;
>  		esz = 64;
> +		mask = ~0UL;

I'd rather we generate the mask in a programmatic way, which is pretty
easy to do since we have the initial element size.

>  		break;
>  	default:
>  		pr_err("%s: unknown variant encoding %d\n", __func__, variant);

To account for the above remarks, I came up with the following patch:

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 4a9e773a177f..422bf9a79ed6 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1535,11 +1535,7 @@ static u32 aarch64_encode_immediate(u64 imm,
 				    u32 insn)
 {
 	unsigned int immr, imms, n, ones, ror, esz, tmp;
-	u64 mask = ~0UL;
-
-	/* Can't encode full zeroes or full ones */
-	if (!imm || !~imm)
-		return AARCH64_BREAK_FAULT;
+	u64 mask;
 
 	switch (variant) {
 	case AARCH64_INSN_VARIANT_32BIT:
@@ -1556,6 +1552,11 @@ static u32 aarch64_encode_immediate(u64 imm,
 		return AARCH64_BREAK_FAULT;
 	}
 
+	/* Can't encode full zeroes or full ones */
+	mask = GENMASK_ULL(esz - 1, 0);
+	if (!imm || !(~imm & mask))
+		return AARCH64_BREAK_FAULT;
+
 	/*
 	 * Inverse of Replicate(). Try to spot a repeating pattern
 	 * with a pow2 stride.

which is of course completely untested (it does compile though).
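
In case GENMASK_ULL() isn't fresh in anyone's mind: it just builds the
all-ones mask for the element size, which is why the "full ones" check
can be written once for both variants. A quick user-space equivalent,
my own stand-in (the real macro lives in include/linux/bits.h):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's GENMASK_ULL(h, l): bits h..l set, inclusive. */
#define GENMASK_ULL(h, l) \
	((~0ULL >> (63 - (h))) & (~0ULL << (l)))

int main(void)
{
	printf("esz=32: 0x%016llx\n", GENMASK_ULL(32 - 1, 0));	/* 0x00000000ffffffff */
	printf("esz=64: 0x%016llx\n", GENMASK_ULL(64 - 1, 0));	/* 0xffffffffffffffff */
	return 0;
}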

Thoughts?

	M.
-- 
Jazz is not dead. It just smells funny...


