Re: [PATCH 20/35] arm64: mte: Add in-kernel MTE helpers

Hi Catalin,

On 8/27/20 10:38 AM, Catalin Marinas wrote:
> On Fri, Aug 14, 2020 at 07:27:02PM +0200, Andrey Konovalov wrote:
>> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
>> index 1c99fcadb58c..733be1cb5c95 100644
>> --- a/arch/arm64/include/asm/mte.h
>> +++ b/arch/arm64/include/asm/mte.h
>> @@ -5,14 +5,19 @@
>>  #ifndef __ASM_MTE_H
>>  #define __ASM_MTE_H
>>  
>> -#define MTE_GRANULE_SIZE	UL(16)
>> +#include <asm/mte_asm.h>
> 
> So the reason for this move is to include it in asm/cache.h. Fine by
> me but...
> 
>>  #define MTE_GRANULE_MASK	(~(MTE_GRANULE_SIZE - 1))
>>  #define MTE_TAG_SHIFT		56
>>  #define MTE_TAG_SIZE		4
>> +#define MTE_TAG_MASK		GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
>> +#define MTE_TAG_MAX		(MTE_TAG_MASK >> MTE_TAG_SHIFT)
> 
> ... I'd rather move all these definitions in a file with a more
> meaningful name like mte-def.h. The _asm implies being meant for .S
> files inclusion which isn't the case.
> 

mte_asm.h was originally called mte_helper.h, hence it made sense to have
these defines there. But I agree with your proposal: it makes things more
readable and it is in line with the rest of the arm64 code (e.g.
page-def.h).

We should update the commit message accordingly as well.
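
For reference, a rough sketch of what the new header could look like (the
defines are the ones from this patch, just moved; the guard names are my
assumption):

/* arch/arm64/include/asm/mte-def.h */
#ifndef __ASM_MTE_DEF_H
#define __ASM_MTE_DEF_H

#define MTE_GRANULE_SIZE	UL(16)
#define MTE_GRANULE_MASK	(~(MTE_GRANULE_SIZE - 1))
#define MTE_TAG_SHIFT		56
#define MTE_TAG_SIZE		4
#define MTE_TAG_MASK		GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
#define MTE_TAG_MAX		(MTE_TAG_MASK >> MTE_TAG_SHIFT)

#endif /* __ASM_MTE_DEF_H */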

>> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
>> index eb39504e390a..e2d708b4583d 100644
>> --- a/arch/arm64/kernel/mte.c
>> +++ b/arch/arm64/kernel/mte.c
>> @@ -72,6 +74,47 @@ int memcmp_pages(struct page *page1, struct page *page2)
>>  	return ret;
>>  }
>>  
>> +u8 mte_get_mem_tag(void *addr)
>> +{
>> +	if (system_supports_mte())
>> +		addr = mte_assign_valid_ptr_tag(addr);
> 
> The mte_assign_valid_ptr_tag() is slightly misleading. All it does is
> read the allocation tag from memory.
> 
> I also think this should be inline asm, possibly using alternatives.
> It's just an LDG instruction (and it saves us from having to invent a
> better function name).
> 

Yes, I agree. I implemented this code in the early days and never got
around to refactoring it.
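
Something along these lines, I imagine (untested sketch: it assumes the
assembler accepts an ".arch_extension memtag" directive, and the
system_supports_mte() check could later become an alternative):

static inline u8 mte_get_mem_tag(void *addr)
{
	if (system_supports_mte())
		/* LDG reads the allocation tag for addr into the register. */
		asm volatile(".arch_extension memtag\n"
			     "ldg %0, [%0]"
			     : "+r" (addr));

	return 0xF0 | mte_get_ptr_tag(addr);
}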

>> +
>> +	return 0xF0 | mte_get_ptr_tag(addr);
>> +}
>> +
>> +u8 mte_get_random_tag(void)
>> +{
>> +	u8 tag = 0xF;
>> +
>> +	if (system_supports_mte())
>> +		tag = mte_get_ptr_tag(mte_assign_random_ptr_tag(NULL));
> 
> Another alternative inline asm with an IRG instruction.
> 

As per above.
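
i.e. something like (same caveats as for the LDG sketch above):

static inline u8 mte_get_random_tag(void)
{
	void *addr;
	u8 tag = 0xF;

	if (system_supports_mte()) {
		/* IRG inserts a randomly generated tag into the register. */
		asm volatile(".arch_extension memtag\n"
			     "irg %0, %0"
			     : "=r" (addr));
		tag = mte_get_ptr_tag(addr);
	}

	return 0xF0 | tag;
}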

>> +
>> +	return 0xF0 | tag;
>> +}
>> +
>> +void * __must_check mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
>> +{
>> +	void *ptr = addr;
>> +
>> +	if ((!system_supports_mte()) || (size == 0))
>> +		return addr;
>> +
>> +	tag = 0xF0 | (tag & 0xF);
>> +	ptr = (void *)__tag_set(ptr, tag);
>> +	size = ALIGN(size, MTE_GRANULE_SIZE);
> 
> I think aligning the size is dangerous. Can we instead turn it into a
> WARN_ON if not already aligned? At a quick look, the callers of
> kasan_{un,}poison_memory() already align the size.
> 

The size here is used only for tagging purposes, and if we want to tag a
sub-granule amount of memory we end up tagging the whole granule anyway.
Why do you think it can be dangerous?

Anyway, I agree that it seems redundant; a WARN_ON here should be
sufficient.
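
Something like this at the top of mte_set_mem_tag_range(), in place of the
ALIGN() (sketch):

	if (WARN_ON(!IS_ALIGNED(size, MTE_GRANULE_SIZE)))
		return addr;

so that a misaligned size is reported loudly instead of being silently
rounded up.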

>> +
>> +	mte_assign_mem_tag_range(ptr, size);
>> +
>> +	/*
>> +	 * mte_assign_mem_tag_range() can be invoked in a multi-threaded
>> +	 * context, ensure that tags are written in memory before the
>> +	 * reference is used.
>> +	 */
>> +	smp_wmb();
>> +
>> +	return ptr;
> 
> I'm not sure I understand the barrier here. It ensures the relative
> ordering of memory (or tag) accesses on a CPU as observed by other CPUs.
> While the first access here is setting the tag, I can't see what other
> access on _this_ CPU it is ordered with.
> 

You are right, it can be removed. I was just overthinking here.

>> +}
>> +
>>  static void update_sctlr_el1_tcf0(u64 tcf0)
>>  {
>>  	/* ISB required for the kernel uaccess routines */
>> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
>> index 03ca6d8b8670..8c743540e32c 100644
>> --- a/arch/arm64/lib/mte.S
>> +++ b/arch/arm64/lib/mte.S
>> @@ -149,3 +149,44 @@ SYM_FUNC_START(mte_restore_page_tags)
>>  
>>  	ret
>>  SYM_FUNC_END(mte_restore_page_tags)
>> +
>> +/*
>> + * Assign pointer tag based on the allocation tag
>> + *   x0 - source pointer
>> + * Returns:
>> + *   x0 - pointer with the correct tag to access memory
>> + */
>> +SYM_FUNC_START(mte_assign_valid_ptr_tag)
>> +	ldg	x0, [x0]
>> +	ret
>> +SYM_FUNC_END(mte_assign_valid_ptr_tag)
>> +
>> +/*
>> + * Assign random pointer tag
>> + *   x0 - source pointer
>> + * Returns:
>> + *   x0 - pointer with a random tag
>> + */
>> +SYM_FUNC_START(mte_assign_random_ptr_tag)
>> +	irg	x0, x0
>> +	ret
>> +SYM_FUNC_END(mte_assign_random_ptr_tag)
> 
> As I said above, these two can be inline asm.
> 

Agreed.

>> +
>> +/*
>> + * Assign allocation tags for a region of memory based on the pointer tag
>> + *   x0 - source pointer
>> + *   x1 - size
>> + *
>> + * Note: size is expected to be MTE_GRANULE_SIZE aligned
>> + */
>> +SYM_FUNC_START(mte_assign_mem_tag_range)
>> +	/* if (src == NULL) return; */
>> +	cbz	x0, 2f
>> +	/* if (size == 0) return; */
> 
> You could skip the cbz here and just document that the size should be
> non-zero and aligned. The caller already takes care of this check.
>

I would prefer to keep the check here, unless there is a valid reason not
to: allocate(0) is a viable operation, hence tag(x, 0) should be as well.
Note also that without the check a zero size would make the stg/sub/cbnz
loop underflow and keep tagging. The caller takes care of it in one place
today, but I do not know where the API will be used in the future.
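
For illustration (hypothetical caller, just to make the point), someone
using the low-level helper directly should get a no-op for a zero size:

	/* Tagging a zero-sized region must not touch any granule. */
	mte_assign_mem_tag_range(ptr, 0);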

>> +	cbz	x1, 2f
>> +1:	stg	x0, [x0]
>> +	add	x0, x0, #MTE_GRANULE_SIZE
>> +	sub	x1, x1, #MTE_GRANULE_SIZE
>> +	cbnz	x1, 1b
>> +2:	ret
>> +SYM_FUNC_END(mte_assign_mem_tag_range)
> 

-- 
Regards,
Vincenzo



